Saturday, August 04, 2012

Windows Azure and Cloud Computing Posts for 8/2/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Updated 8/4/2012 8:00 AM PDT with new articles marked ••.

• Updated 8/3/2012 4:30 PM PDT with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Doug Mahugh (@dmahugh) reported an update to the Windows Azure Storage plugin for WordPress in an 8/3/2012 post to the Interoperability @ Microsoft blog:

The Windows Azure Storage Plugin for WordPress was updated today to use the new Windows Azure SDK for PHP. The plugin comes with a comprehensive user guide, but for a quick overview of what it does and how to get started, see Brian Swan’s blog post. Cory Fowler also has some good information on how to contribute to the plugin, which is an MS Open Tech open-source project hosted on the SVN repo of the WordPress Plugin Directory.

This plugin allows you to use Windows Azure Storage Service to host the media files for a WordPress blog. I use WordPress on my personal blog where I write mostly about photography and sled dogs, so I installed the plugin today to check it out. The installation is quick and simple (like all WordPress plugins, you just need to copy the files into a folder under your wp-content/plugins folder), and the only setup required is to point it at a storage account in your Windows Azure subscription. Brian’s post has all the details.

The plugin uses the BlobRestProxy class exposed by the PHP SDK to store your media files in Windows Azure blob storage:

Once the plugin is installed, you don’t need to think about it – it does everything behind the scenes, while you stay focused on the content you’re creating. If you’re writing a blog post in the WordPress web interface, you’ll see a new button for Windows Azure Storage, which you can use to upload and insert images into your post:

Brian’s post covers the details of how to upload media files through the plugin’s UI under the new button.

If you click on the Add Media icon instead, you can add images from the Media Library, which is also stored in your Windows Azure storage account under the default container (which you can select when configuring the plugin).

If you use Windows Live Writer (as I do), you don’t need to do anything special at all to take advantage of the plugin. When you publish from Live Writer the media files will automatically be uploaded to the default container of your storage account, and the links within your post will point to the blobs in that container as appropriate.

To the right is a blog post I created that takes advantage of the plugin. I just posted it from Live Writer as I usually do, and the images are stored in the wordpressmedia container of my dmahughwordpress storage account, with URLs like this one:

http://dmahughwordpress.blob.core.windows.net/wordpressmedia/2012/08/DSC_7914.jpg

Check it out, and let us know if you have any questions. If you don’t have an Azure subscription, you can sign up for a free trial here.


Brian Swan (@brian_swan) expanded on the Windows Azure Storage Plugin for WordPress Updated! topic in an 8/3/2012 post to his [Windows Azure’s] Silver Lining blog:

Microsoft Open Technologies, Inc. (the recently-created Microsoft subsidiary charged with advancing the company’s investment in interoperability, open standards and open source) has updated the Windows Azure Storage Plugin for WordPress. The plugin functionality has not changed (it still allows you to use the Windows Azure Blob Service to store and serve media files), but under the covers the plugin is now using the “new” Windows Azure SDK for PHP. In this post, I’ll show you how to get started with the plugin. If you are interested in an overview of the plugin and its architecture, see Windows Azure Storage Plugin for WordPress. If you are interested in playing with the source code, see Update Released: Windows Azure Storage Plugin for WordPress.

Note: You can download the plugin here: Windows Azure Storage Plugin for WordPress. The download contains a User Guide that covers the details of installation and usage of the plugin. This post will get you started, but refer to the User Guide for more detail.

What does the Windows Azure Storage Plugin for WordPress do?

The Windows Azure Storage Plugin for WordPress allows you to store and serve your media files from the Windows Azure Blob Service. The Windows Azure Blob Service provides storage for binary objects in the cloud and is accessible from any platform. The service creates 3 copies of your files to protect against data loss and works with REST conventions and standard HTTP operations to create URIs to identify and expose your files. And, depending on your site, using the Blob service may help reduce the cost of running your blog. For more information, see Pricing Details.

What do I need to use the plugin?

Assuming you have WordPress running already, you need four things to use the Windows Azure Storage Plugin for WordPress:

  1. A Windows Azure account. If you don’t have one, you can sign up for a 90-day free trial.
  2. A Windows Azure storage account.
  3. A container in your storage account for your media files.
  4. The Windows Azure Storage Plugin for WordPress.

If you don’t have WordPress running already, see How to create and deploy a website, which walks you through the steps of getting WordPress running on Windows Azure Websites.

How do I enable the plugin?

After you have created a container and downloaded the Windows Azure Storage Plugin for WordPress (see the section above), the steps for enabling the plugin are similar to the steps for enabling any other WordPress plugin. I’ll walk you through the steps here:

1. Extract the windows-azure-storage folder from the windows-azure-storage.zip archive and put it in your WordPress plugins directory (usually wordpress/wp-content/plugins). Note that when you unpack the .zip archive, the resulting directory is windows-azure-storage/windows-azure-storage. It is the windows-azure-storage sub-folder that you want.

2. Log in to WordPress as an administrator, go to the Plugins tab on your Dashboard, and click the Activate link under Windows Azure Storage Plugin for WordPress:


3. Under the Settings tab, click on Windows Azure. In the resulting dialog, provide your storage account name and private key and check the Use Windows Azure Storage when uploading via WordPress’ upload tab checkbox. Click Save Changes.


4. Finally, you need to select a default container for storing your media files. If your storage account does not have any containers, you will see a Create New Container dialog (otherwise the Default Storage Container dropdown will be populated with the container names in your storage account). You can create a new public container by entering a new container name and clicking Create. Then, select the default container from the Default Storage Container dropdown, and click Save Changes again.

Note: If you are creating a new container, container names can only contain letters, numbers, and dashes (-). Letters must be lowercase, and the name must be between 3 and 63 characters.
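If you want to sanity-check a candidate name before clicking Create, a quick client-side validation could look like the hedged sketch below (shown in C# purely for illustration; IsValidContainerName is a hypothetical helper, and the storage service enforces these rules itself, along with a few more such as no consecutive dashes and starting and ending with a letter or number):

using System.Text.RegularExpressions;

// Hypothetical helper that checks the container-name rules described above,
// plus the service's start/end and no-consecutive-dash rules.
public static bool IsValidContainerName(string name)
{
    return name != null
        && Regex.IsMatch(name, @"^(?!.*--)[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$");
}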

You are now ready to use the plugin.

How do I use the plugin?

After you have enabled the plugin, when you create a new post or edit an existing post, you should see the Windows Azure icon:


Clicking the icon will open the Windows Azure Storage dialog:


From the Browse tab, you can click on an image/video/sound file to insert it into a post. From the Upload tab, you can upload files to your storage container. And, from the Search tab, you can search your storage container for files.

How do I use the plugin with Windows Azure Websites?

If you want to get started with WordPress on Windows Azure, the easiest way to go is to install WordPress on Windows Azure Web Sites from the Application Gallery. This tutorial walks you through all the steps: How to: Create a Website from the Gallery. After you have got WordPress set up, you can install the Windows Azure Storage Plugin for WordPress by doing the following:

1. Use your favorite FTP client to connect to your web site. The FTP hostname and user name are shown on your website dashboard. (See How to Manage Websites for more information.)

2. Extract the windows-azure-storage folder from the windows-azure-storage.zip archive (assuming you have downloaded it per the instructions earlier) and use your FTP client to put it in the WordPress plugins directory (wordpress/wp-content/plugins). Again, note that when you unpack the .zip archive, the resulting directory is windows-azure-storage/windows-azure-storage. It is the windows-azure-storage sub-folder that you want.

3. From your website’s dashboard, stop and start the site, then start with step 2 in the How do I enable the plugin? section above.

That’s it.

Note: if you are starting with a local copy of WordPress and you have the plugin already enabled, you can create a new Windows Azure Web Site with a MySQL database and deploy to it with FTP or Git. (You will, of course, have to update your wp-config.php file with the correct hostname and database connection information.)

As always, we’d love to get feedback on this plugin.


John Furrier (@furrier) reported Breaking: Hadoop Community Votes To Upgrade Hadoop Core with YARN in an 8/3/2012 post to the DevOpsANGLE blog:

The Hadoop community voted last night to upgrade the core Hadoop software by promoting YARN to a full-blown Apache sub-project of Hadoop, alongside HDFS and MapReduce.

Hadoop YARN is basically a MapReduce upgrade that aims to take Apache Hadoop beyond MapReduce for data processing.

According to Hortonworks cofounder Arun Murthy’s blog post, Apache Hadoop YARN joins Hadoop Common (core libraries), Hadoop HDFS (storage) and Hadoop MapReduce (the MapReduce implementation) as the sub-projects of Apache Hadoop, which itself is a Top Level Project in the Apache Software Foundation. Until this milestone, YARN was a part of the Hadoop MapReduce project and is now poised to stand on its own as a sub-project of Hadoop.

This is a huge win for Hortonworks, which has established itself, along with Cloudera, as an emerging leader and credible steward of stable, open Hadoop. The marketplace is looking for confidence in the stability and maturity of Apache Hadoop, as many organizations have already been successful driving real business and technical value.

SiliconANGLE.tv theCUBE alum and Hortonworks cofounder Arun Murthy announced the new project in a blog post. Here is his announcement:

As folks are aware, Hadoop HDFS is the data storage layer for Hadoop and MapReduce was the data-processing layer. However, the MapReduce algorithm, by itself, isn’t sufficient for the very wide variety of use-cases we see Hadoop being employed to solve. With YARN, Hadoop now has a generic resource-management and distributed application framework, whereby one can implement multiple data processing applications customized for the task at hand. Hadoop MapReduce is now one such application for YARN and I see several others given my vantage point – in future you will see MPI, graph-processing, simple services etc.; all co-existing with MapReduce applications in a Hadoop YARN cluster.

Implications for the Apache Hadoop Developer community

I’d like to take a brief moment to walk folks through the implications of making Hadoop YARN as a sub-project, particularly for members of the Hadoop developer community.

  • We will now see a top-level hadoop-yarn-project source folder in Hadoop trunk.
  • We will now use a separate JIRA project for issue tracking for YARN, i.e. https://issues.apache.org/jira/browse/YARN
  • We will also use a new yarn-dev@hadoop.apache.org mailing list for collaboration.
  • We will continue to co-release a single Apache Hadoop release that will include the Common, HDFS, YARN and MapReduce sub-projects.
If you would like to play with YARN please download the latest hadoop-2 release from the ASF and start contributing – either to core YARN sub-project or start building your cool application on top!

Please do remember that hadoop-2 is still deemed alpha quality by the Apache Hadoop community, but YARN itself shows a lot of promise and we are excited by the future possibilities!

Conclusion

Overall, having Hadoop YARN as a sub-project of Apache Hadoop is a significant milestone for Hadoop several years in the making. Personally, it is very exciting given that this journey started more than 4 years ago with https://issues.apache.org/jira/browse/MAPREDUCE-279. It’s a great pleasure, and honor, to get to this point by collaborating with a fantastic community that is driving Apache Hadoop.

What Does This Mean for the Hadoop Community:

Hadoop has established itself as the big data platform of choice. Many big organizations are moving to Hadoop and those who don’t have a big data strategy will be left behind.

  • Open Source Innovation vs. Stability: As an open source technology matures and becomes mainstream, it becomes increasingly important to balance community innovation and enterprise stability. Core Apache Hadoop has reached stability and has been proven in large-scale deployments. It can be trusted, and it is no longer necessary to rely on the bleeding-edge development lines of Hadoop.
  • Apache Hadoop Platform Completeness: Apache Hadoop with its core set of related projects presents a wide array of functions to enable the ecosystem, ease operations and empower the developer with enterprise-ready tools.
  • Apache Hadoop Maturity: Apache Hadoop has come a long way. Trusted and tested versions of Hadoop are very important. Additionally, upgrades like YARN to the core are a great example of balancing innovation and stability.
  • Community Stewardship: The Apache Hadoop community continues to push the platform forward. Hortonworks and Cloudera are working closely with the community and are stewards of the core so that it remains a viable solution for the enterprise, but they are also innovators at the edge to advance Hadoop further.

The trend is that half the world’s data will be processed by Apache Hadoop. The Hadoop community continues to revolutionize and commoditize the storage and processing of big data via open source. The major focus needs to be on the scale and adoption of Apache Hadoop. All the players are extending and dedicating significant engineering resources to make Apache Hadoop more robust and easier to integrate, extend, deploy and use.

Hadoop continues to be the big data platform of choice and upgrades will come. I’m looking forward to more conversations around this between now and Hadoop World and Strata this fall in NYC October 23-25th.

More info on Hadoop World / Strata here – http://strataconf.com/

Here is my interview with the cofounder of Hortonworks Arun Murthy at Hadoop Summit this past June.

Part I:

Part II:


• Haishi Bai (@HaishiBai2010) recommended Saving storage space on Windows Azure Blob Storage with Dynamic File Slicing in an 8/3/2012 post:

Windows Azure Blob Storage is cheap – with local redundancy you need to pay only $9.30 per 100 GB of data you store (Azure pricing calculator, August 2012). The price is so low that in many cases you don’t bother to save space. However, if you have an extremely large volume of data, or you want to provide services that leverage storage services heavily, saving space means real savings and a competitive edge over other service providers.

In this post I’m going to present an idea that utilizes dynamic file slicing to significantly reduce storage space consumption. The solution works best when you have lots of “similar” documents to be saved. Many businesses need to archive multiple versions of the same document. Although the deltas between versions might be minor, the whole document is preserved multiple times, wasting storage space. It would be nice if we could just save the deltas. In addition, many businesses use standard templates for their documents. If we could save the common parts of a template and share them among all documents that use the template, we can reduce space consumption significantly.

The idea is simple – before we upload a file to blob storage, we’ll smartly slice the file into blocks. Then, we’ll only upload the blocks that have not been saved before. We’ll save the block list somewhere, and when a user requests the file, we’ll reassemble the file using the blocks in the list. Many benefits of this approach should be apparent:

  • Save space. Multiple versions of the same document can potentially share many common blocks. The same applies to documents that share common templates.
  • Reduce network traffic. Because we are only uploading blocks that have not been saved before, we avoid repeatedly uploading the same content over and over again.
  • Increased performance. Less I/O means faster speed. In addition, commonly used blocks can be cached locally, making data acquisition even faster.
  • Increased profitability. If you could provide users with 100 GB of storage using only 50 GB of physical storage, that’s a win!

Shouldn’t simple file compression save us lots of space as well? True. And the idea presented here doesn’t exclude compressed files. However, in the case of many similar documents, the space saving using this method is more significant. For example, let’s say we have 1,000 1 MB documents that are mostly similar. We slice each document into 10 100 KB blocks, and each document has only 2 unique blocks. Then the total space we need is 8 * 100 KB of shared blocks + 1,000 * 2 * 100 KB of unique blocks ≈ 196 MB. On the other hand, if we use a compression algorithm that can compress files to 40% of their original sizes, we still need 400 MB.

Obviously the slicing algorithm plays a key role in this idea. The algorithm needs to be fast and smart enough to slice files based on contents instead of fixed block sizes. Why? If we slice the files by fixed block sizes, removing or adding a single byte in the document will cause all the blocks after that byte to change. I wrote a blog post a while ago discussing one such algorithm. The blocks are identified by their hash signatures, so looking for matching blocks is a simple matter of hash comparison. The block hashes can be cached locally to further reduce search time. Of course, there’s a low possibility that hash codes may collide. Constraining hash matching by file types and even by customer organizations can easily and effectively reduce the possibility of collision.
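To make the idea more concrete, here is a hedged C# sketch (not Haishi’s implementation) of content-based slicing with hash de-duplication. The rolling-hash boundary test is deliberately simplistic, and knownBlockHashes and uploadBlock stand in for a persisted hash index and a blob-storage upload call:

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

public static class FileSlicer
{
    // Slice a stream into variable-size blocks: cut whenever the low bits of a
    // simple rolling checksum hit a magic value, so inserting or removing a byte
    // only shifts boundaries locally instead of changing every later block.
    public static IEnumerable<byte[]> Slice(Stream input, int avgBlockSize = 100 * 1024)
    {
        var block = new List<byte>();
        uint rolling = 0;
        int b;
        while ((b = input.ReadByte()) != -1)
        {
            block.Add((byte)b);
            rolling = (rolling << 1) ^ (byte)b;
            if (rolling % (uint)avgBlockSize == 0 && block.Count > 1024)
            {
                yield return block.ToArray();
                block.Clear();
            }
        }
        if (block.Count > 0)
            yield return block.ToArray();
    }

    // Upload only blocks whose hash we have not seen before, and return the
    // block list needed to reassemble the file later.
    public static IList<string> Store(Stream input, ISet<string> knownBlockHashes,
                                      Action<string, byte[]> uploadBlock)
    {
        var blockList = new List<string>();
        using (var sha = SHA256.Create())
        {
            foreach (var block in Slice(input))
            {
                string hash = Convert.ToBase64String(sha.ComputeHash(block));
                if (knownBlockHashes.Add(hash))    // true only for unseen blocks
                    uploadBlock(hash, block);      // e.g. a blob upload
                blockList.Add(hash);
            }
        }
        return blockList;
    }
}

In a real implementation, uploadBlock would write to blob storage (for example, one blob or block per hash) and the block list would be persisted so the file can be reassembled on request.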

In the scenario of collaborative editing, dynamic file slicing allows documents to be synced more quickly. For instance, the “hot” blocks that are under active editing can be loaded into an in-memory cache so changes can be shared and merged quickly, providing a much more fluent collaborative editing experience.

Lastly, for businesses that need document content to be encrypted, expensive encryption/decryption operations can be reduced because the shared blocks only need to be encrypted/decrypted once.


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

• Han of the Microsoft Sync Team posted a Preview 6 Client Agent Upgrade Notification on 8/2/2012:

We plan to deploy the next Preview release (SU6) during the week of 8/13. For the release, there will be a new agent. The current client agent (SU5) will expire on 8/31/2012. Please upgrade your agent accordingly.

The message is rather terse. I requested a link to documentation for “upgrading your agent accordingly.”


Cihan Biyikoglu (@cihangirb) posted a Newsflash: Reference Data Management Simplified in Federations with the July 2012 Update to SQL Database! Reference Data Replication with Data Sync Service, Unique ID Generation for Reference Data and more… on 7/27/2012 (missed when published):

Long title this time…. I realize it is hard to keep track of the service updates as we release updates frequently to the Azure SQL Database service. For federations, in the July 2012 update, we have made some improvements to the federation reference data experience and eased some of the restrictions. I’ll talk about three important scenarios we enabled with the update:

Replication of Reference Data in Federations using SQL Data Sync: One exciting piece of news is that with the July 2012 update, you will also be able to use SQL Data Sync with federation members. SQL Data Sync does not have a native experience for Federations just yet, but given that federation members are just databases with their own database names, you can refer to them from SQL Data Sync.

This has been quite a popular ask from many customers in the past: some customers would like to use SQL Data Sync to replicate all data in one federation member to another database/federation member in some other geography or server in the cloud, others wanted to consolidate data from many/all federation members into a single central database, even to a single SQL Server database on premises. With the recent update, these and many other topologies you can imagine with regular databases are all possible with the data sync service for federation member databases… One other great use can be to use SQL Data Sync with Federations to replicate reference data across federation members. Here is a quick picture and walkthrough of the setup for synchronizing the language codes across 30 federation members. In my case I wanted the topology to have the root database as a hub and all members defined as regular edge databases.

  • Create a “sync server” and a “sync group” called sync_codes
  • Add the root database as the hub database; blogsrus_db with conflict resolution set to “Hub Wins” and schedule set to every 5 mins.
  • Define the Sync dataset as the dbo.language_code_tbl


  • Add federation member databases into the sync_codes sync group.


With this setup, replication happens bi-directionally. This means, I can update any one of the federation member dbs and the changes will first get replicated to the root db copy of my reference table and then will be replicated to all other federation member dbs automatically by SQL Data Sync. SQL Data Sync provides powerful control over the direction of data flow and conflict resolution to create the desired topology for syncing reference data in federation members.


There are a few limitations to be aware of with SQL Data Sync, however. First, the service has 5 minutes as its lowest replication latency. There is no scripting support for setting up the data sync relationships, which means you will need to populate all the db names through the UI by hand. SQL Data Sync also does not allow synchronization between more than 30 databases in sync groups in a single sync server at the moment, and you can only create a single sync server with DSS today. SQL Data Sync is currently in preview mode and is continuously collecting feedback. Vote for your favorite request or add a new one at the SQL Data Sync Feature Voting website!

Optimistic Concurrency Control with Reference Data: The timestamp data type is used as the basis for implementing optimistic concurrency control in many modern apps. Developers build custom behavior to detect conflicting updates using the timestamp type. This is especially a key conflict-detection mechanism for managing reference data in federations, as the eventually consistent reference data receives updates from many sources. Well, we have heard strong customer feedback around these restrictions, and with the recent update to SQL Database we removed the restriction on the timestamp data type in reference tables in federation members. You can now use the timestamp or rowversion column type in reference tables.

Here is a basic use case showing why timestamp can be a powerful tool, especially for reference data that is replicated across multiple members; let’s say you have a products reference table that you need to update, and you use the data sync service or some script rollout in the background to do this update on all members. Also assume that your application is concurrently doing updates to the products table as part of its regular OLTP workload. How do you detect conflicts across these updates and avoid a lost-update problem? You can add the timestamp property to the products table and check to ensure the products row you are about to change has the identical timestamp value it had at the time you read the row. That ensures that this particular product you are about to update has not been updated between the time you read it and came back to update it…
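As a concrete illustration of that check, here is a hedged ADO.NET sketch; the table and column names are hypothetical, it assumes a using System.Data.SqlClient directive, and originalRowTs is the rowversion value (a byte array) read along with the row. The UPDATE only succeeds when the rowversion still matches what was originally read:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    var cmd = new SqlCommand(
        @"UPDATE products
          SET    price = @newPrice
          WHERE  product_id = @id
          AND    row_ts = @originalRowTs", conn);
    cmd.Parameters.AddWithValue("@newPrice", newPrice);
    cmd.Parameters.AddWithValue("@id", productId);
    cmd.Parameters.AddWithValue("@originalRowTs", originalRowTs);

    if (cmd.ExecuteNonQuery() == 0)
    {
        // Conflict: the row changed (or was deleted) after it was read.
        // Re-read the row and decide whether to merge, retry, or give up.
    }
}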

Simplified Unique ID Generation for Reference Data with Identity Property: Another piece of strong feedback came from customers on the identity property. With the update in July, we have removed the restrictions on the identity property on reference tables in federation members. This should minimize some of the schema changes we require for migrating existing database schemas over to federations. One less thing to worry about…

Here is a quick example of the type of table that will be possible to create in federation members with the updated SQL Database. This used to be restricted in previous releases:

create table ref_table(c1 int primary key identity, c2 timestamp)
go
insert into ref_table(c2) values (default)
insert into ref_table(c2) values (default)
insert into ref_table(c2) values (default)

go
select * from ref_table

returns:

c1 c2
----------- ------------------
1 0x000000000000001A
2 0x000000000000001D
3 0x000000000000001E

I should note that identity and timestamp are still restricted on federated tables in federation members. The main restriction is due to the MERGE operation we would like to introduce in the future, which will be the opposite of the SPLIT command. When datasets are merged, we expect to have conflicts in the identity values and timestamp values we generated previously in two separate federation members. We are working on improving the overall experience with timestamp and identity on federated tables in the future. Until then, for generating unique IDs and for optimistic concurrency management on federated tables, you can refer to this post on the topic for workarounds and suggestions: ID Generation in Federations.

As always love to hear feedback on these experiences and opinions on Federations.


<Return to section navigation list>

MarketPlace DataMarket, Cloud Numerics, Big Data and OData

• Safari Books Online offers free access to Packt Publishing’s OData Programming Cookbook for .NET Developers by Steven Cheng, a Senior Support Engineer at Microsoft CSS, China, with a trial subscription:

What this book covers

Chapter 1, Building OData Services, introduces how we can use WCF Data Services to create OData services based on various kinds of data sources such as ADO.NET Entity Framework, LINQ to SQL, and custom data objects.

Chapter 2, Working with OData at Client Side, shows how to consume OData services in client applications. This will cover how we can use strong-typed client proxy, WebRequest class, and unmanaged code to access OData services. You will also learn how to use OData query options, asynchronous query methods, and other client-side OData programming features.

Chapter 3, OData Service Hosting and Configuration, discusses some typical OData service hosting scenarios including IIS hosting, custom .NET application hosting, and Windows Azure cloud hosting. This chapter also covers some service configuration scenarios such as applying basic access rules, exposing error details, and enabling HTTP compression.


Steven Cheng published a detailed Data on Mobile Devices post to Packt Publishing’s blog. From the Introduction:

OData (Open Data Protocol) is a web protocol for querying and updating data, which can be freely incorporated in various kind of data access applications. OData makes it quite simple and flexible to use by applying and building upon existing well-defined technologies such as HTTP, XML, AtomPub, and JSON. WCF Data Services (formerly known as ADO.NET Data Services) is a well-encapsulated component for creating OData services based on the Microsoft .NET Framework platform. It also provides a client library with which you can easily build client applications that consume OData services. In addition to WCF Data Services, there are many other components or libraries, which make OData completely available to the non-.NET or even non-Microsoft world.

In this article by Steven Cheng, author of OData Programming Cookbook for .NET Developers, we will cover:

  • Accessing OData service with OData WP7 client library
  • Creating Panorama-style, data-driven Windows Phone applications with OData
  • Using HTML5 and OData to build native Windows Phone applications

With the continuous evolution of mobile operating systems, smart mobile devices (such as smartphones or tablets) play increasingly important roles in everyone's daily work and life. The iOS (from Apple Inc., for iPhone, iPad, and iPod Touch devices), Android (from Google) and Windows Phone 7 (from Microsoft) operating systems have shown us the great power and potential of modern mobile systems.

In the early days of the Internet, web access was mostly limited to fixed-line devices. However, with the rapid development of wireless network technology (such as 3G), Internet access has become a common feature for mobile or portable devices. Modern mobile OSes, such as iOS, Android, and Windows Phone have all provided rich APIs for network access (especially Internet-based web access). For example, it is quite convenient for mobile developers to create a native iPhone program that uses a network API to access remote RSS feeds from the Internet and present the retrieved data items on the phone screen. And to make Internet-based data access and communication more convenient and standardized, we often leverage some existing protocols, such as XML or JSON, to help us. Thus, it is also a good idea if we can incorporate OData services in mobile application development so as to concentrate our effort on the main application logic instead of the details about underlying data exchange and manipulation.

In this article, we will discuss several cases of building OData client applications for various kinds of mobile device platforms. The first four recipes will focus on how to deal with OData in applications running on Microsoft Windows Phone 7. And they will be followed by two recipes that discuss consuming an OData service in mobile applications running on the iOS and Android platforms. Although this book is .NET developer-oriented, since iOS and Android are the most popular and dominating mobile OSes in the market, I think the last two recipes here would still be helpful (especially when the OData service is built upon WCF Data Service on the server side). …

Steven continues with detailed examples of and source code for:

  • Accessing OData service with OData WP7 client library
  • Consuming JSON-format OData service without OData WP7 client library
  • Creating Panorama style data-driven Windows Phone application with OData
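For readers who have not used the WCF Data Services client library before, here is a minimal hedged sketch (not taken from the book) of querying a public OData feed from a plain .NET console application. The Customer class stands in for a proxy type you would normally generate with Add Service Reference or DataSvcUtil.exe, and the project needs a reference to System.Data.Services.Client:

using System;
using System.Data.Services.Client;
using System.Linq;

// Stand-in for a generated proxy class; property names must match the feed.
public class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string Country { get; set; }
}

class Program
{
    static void Main()
    {
        // The public read-only Northwind sample service
        var ctx = new DataServiceContext(
            new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

        // The LINQ query is translated into an OData URI ($filter=Country eq 'Germany')
        var germanCustomers = ctx.CreateQuery<Customer>("Customers")
                                 .Where(c => c.Country == "Germany")
                                 .ToList();

        foreach (var c in germanCustomers)
            Console.WriteLine("{0}: {1}", c.CustomerID, c.CompanyName);
    }
}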


Ronnie Hoogerwerf (@rhoogerw) reported availability of a Microsoft Codename “Cloud Numerics” Lab Refresh on 8/2/2012:

Today we are announcing a refresh of the Microsoft Codename "Cloud Numerics" Lab. We want to thank everyone who participated in the initial Lab; we listened to your feedback to make improvements and add exciting new features. Your continued feedback and participation is what makes this lab a success! Thank you.

Here’s what is new in the refresh of the Cloud Numerics Lab:

Improved user experience: through more actionable exception messages, a refactoring of the probability distribution function APIs, and better and more actionable feedback in the deployment utility. In addition, the deployment process time has decreased and the installer supports installation on an on-premises Windows HPC Cluster. All up, this refresh provides for a more efficient way of writing and deploying Cloud Numerics applications to Windows Azure.

More scale-out enabled functions: more algorithms are enabled to work on distributed arrays. This significantly increases the breadth and depth of big data algorithms that can be developed using Cloud Numerics. Scale-out functionality was added in the following areas: Fourier Transforms, Linear Algebra, Descriptive Statistics, Pattern Recognition, Random Sampling, Similarity Measures, Set Operations, and Matrix Math.

Array indexing and manipulation: a large part of any data analytics application concerns handling and preparing data to be in the right shape and have the right content. With this refresh Cloud Numerics adds advanced array indexing enabling users to easily and efficiently set and extract subsets of arrays and to apply boolean filters.

Sparse data structures and algorithms: many real-world big data sets are sparse, i.e., not every field in a table has a value. With this refresh of the lab we introduce a distributed sparse matrix structure to hold these datasets and introduce core sparse linear algebra functions enabling scenarios such as document classification, collaborative filtering, etc.

Apply/Sweep framework: in addition to the built-in parallelism of the Cloud Numerics Lab, this refresh now exposes a set of APIs to enable embarrassingly parallel patterns. The Apply framework enables applying arbitrary serializable .NET code to each element of an array or to each row or column of an array. The framework also provides a set of expert-level interfaces to define arbitrary array splits. The Sweep framework performs as its name implies – it enables distributed parameter sweeps across a set of nodes, allowing for better execution times.

Improved IO functionality: we added more parallel readers to enable out of the box data ingress from Windows Azure storage and introduced parallel writers.

Documentation: we introduced detailed mathematical descriptions of more than half of the algorithms using print-quality formulae and best-of-web equation rendering that help clarify algorithm mathematical definition and implementation detail. In addition to the “Getting Started” wiki, we added conceptual documentation to the Cloud Numerics help, including the programming model, the new Apply framework, IO, and so on.

Stay tuned for upcoming blog posts

  • F#: We’ll be distributing an F# add-in for Cloud Numerics soon. The add-in exposes the Cloud Numerics APIs in a more functional manner and introduces operators, such as matrix multiply, as well as F#-style constructors for, and indexing on, Cloud Numerics arrays.
  • Text analytics using sparse data structures

Do you want to learn more about Microsoft Codename “Cloud Numerics” Lab? Please visit us on our SQL Azure Labs home page, take a deeper look at the Getting Started material and Sign Up to get access to the installer. Let us know what you think by sending us email at cnumerics-feedback@microsoft.com.

The Cloud Numerics refresh depends on the newly released Azure SDK 1.7 and Microsoft HPC Server R2 SP4. It does not provide support for the Visual Studio 2012 RC. [Emphasis added.]


Tip: You’ll probably receive the following message when you start the MicrosoftCloudNumerics.msi installer for v0.2:

The following prerequisites for installing and using Microsoft "Cloud Numerics" are missing:

Please rerun the Microsoft "Cloud Numerics" installer after installing these prerequisite packages.

You must uninstall previous versions of the preceding components before installing them.

Here are links to OakLeaf posts about the previous “Cloud Numerics” version:


No significant articles today.


<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

Brent Stineman (@BrentCodeMonkey) described the Local File Cache in Windows Azure in a 7/2/2012 post:

When creating a traditional on-premise application, it’s not uncommon to leverage the local file system as a place to store temporary files and thus increase system performance. But with Windows Azure Cloud Services, we’ve been taught that we shouldn’t write things to disk because the virtual machines that host our services aren’t durable. So we start going to remote durable storage for everything. This slows down our applications so we need to add back in some type of cache solution.

Previously, I discussed using the Windows Azure Caching Preview to create a distributed, in-memory cache. I love that we finally have a simple way to do this. But there are times when I think that caching something, for example an image file that doesn’t change often, within a single instance would be fine, especially if I don’t have to use up precious RAM on my virtual machines.

Well there is an option! Windows Azure Cloud Services all include, at no additional cost, an allocation of non-durable local disk space called, surprisingly enough, “Local Storage”. For each core you get 250 GB of essentially temporary disk space. And with a bit of investment, we can leverage that space as a local, file-backed cache.

Extending System.Runtime.Caching

So .NET 4.0 introduced the System.Runtime.Caching namespace along with an abstract base class, ObjectCache, that can be extended to provide caching functionality with whatever storage system we want to use. Now this namespace also provides a concrete implementation called MemoryCache, but we want to use the file system. So we’ll create our own implementation called FileCache.

Note: There’s already a codeplex project that provides a file-based implementation of ObjectCache. But I still wanted to roll my own for the sake of explaining some of the challenges that will arise.

So I create a class library and add a reference to System.Runtime.Caching. Next up, let’s rename the default class “Class1.cs” to “FileCache.cs”. Lastly, inside of the FileCache class, I’ll add a using statement for the Caching namespace and make sure my new class inherits from ObjectCache.

Now if we try to build the class library now, things wouldn’t go very well because there are 18 different abstract members we need to implement. Fortunately I’m running the Visual Studio Power Tools so it’s just a matter of right-clicking on ObjectCache where I indicated I’m inheriting from it and selecting the “Implement Abstract Class”. This gives us shells for all 18 abstract members, but until we add some real implementation in, our FileCache class won’t even be minimally useful.

I’ll start by fleshing out the Get method and adding a public property, CacheRootPath, to the class that designates where our file cache will be kept.

public string CacheRootPath
{
    get { return cacheRoot.FullName; }
    set
    {
        cacheRoot = new DirectoryInfo(value);
        if (!cacheRoot.Exists) // create if it doesn't exist
            cacheRoot.Create();
    }
}

public override bool Contains(string key, string regionName = null)
{
    string fullFileName = GetItemFileName(key,regionName);
    FileInfo fileInfo = null;

    if (File.Exists(fullFileName))
    {
        fileInfo = new FileInfo(fullFileName);

        // if item has expired, don't return it
        //TODO: 
        return true;
    }
    else
        return false;
}

// return type is an object, but we'll always return a stream
public override object Get(string key, string regionName = null)
{
    if (Contains(key, regionName))
    {
        //TODO: wrap this in some exception handling
        MemoryStream memStream = new MemoryStream();
        FileStream fileStream = new FileStream(GetItemFileName(key, regionName), FileMode.Open);
        fileStream.CopyTo(memStream);
        fileStream.Close();

        return memStream;
    }
    else
        return null;
}

CacheRootPath is just a way for us to set the path to where our cache will be stored. The Contains method is a way to check and see if the file exists in the cache (and ideally should also be where we check to make sure the object isn’t expired), and the Get method leverages Contains to see if the item exists in the cache and retrieves it if it exists.
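One housekeeping note before going further: the snippets in this post reference a cacheRoot field, a timerItem field (used in the constructor later on), and a GetItemFileName helper that are never shown. The downloadable code presumably defines them; a minimal assumed version, so the class compiles, could be:

private DirectoryInfo cacheRoot;
private System.Threading.Timer timerItem;

// Assumed helper: maps a cache key (and optional region) to a file path
// under the cache root.
private string GetItemFileName(string key, string regionName = null)
{
    return regionName == null
        ? Path.Combine(cacheRoot.FullName, key)
        : Path.Combine(cacheRoot.FullName, regionName, key);
}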

Now this is where I had my first real decision to make. Get must return an object, but what type of object should I return? In my case I opted to return a memory stream. Now I could have returned a file stream that was attached to the file on disk, but because this could lock access to the file, I wanted to have explicit control of that stream. Hence I opted to copy the file stream to a memory stream and return that to the caller.

You may also note that I left the expiration check alone. I did this for the demo because your needs for file expiration may differ. You could base this on FileInfo.CreationTimeUtc or FileInfo.LastAccessTimeUtc; both are valid, as may be any other metadata you need to base it on. I do recommend one thing: make a separate method that does the expiration check. We will use it later.

Note: I’m specifically calling out the use of UTC. When in Windows Azure, UTC is your friend. Try to use it whenever possible.
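Here is a minimal sketch of what that separate expiration-check method might look like, assuming a simple fixed time-to-live measured from the file's creation time; both the TTL value and the policy choice are placeholders rather than part of the original design:

// Placeholder policy: a fixed TTL measured from the file's creation time (UTC).
private TimeSpan defaultTimeToLive = TimeSpan.FromHours(1);

private bool IsExpired(string fullFileName)
{
    FileInfo fileInfo = new FileInfo(fullFileName);
    return DateTime.UtcNow - fileInfo.CreationTimeUtc > defaultTimeToLive;
}

With a helper like this, Contains could return false (and optionally delete the file) when the item has expired, and the eviction timer we add later can reuse the same check.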

Next up, we have to shell out the three overloaded versions of AddOrGetExisting. These methods are important because even though I won’t be directly accessing them in my implementation, they are leveraged by the base class’s Add method. And thus, these methods are how we add items into the cache. The first two overloaded methods will call the lowest level implementation.

public override object AddOrGetExisting(string key, object value, CacheItemPolicy policy, string regionName = null)
{
    if (!(value is Stream))
        throw new ArgumentException("value parameter is not of type Stream");

    return this.AddOrGetExisting(key, value, policy.AbsoluteExpiration, regionName);
}

public override CacheItem AddOrGetExisting(CacheItem value, CacheItemPolicy policy)
{
    var tmpValue = this.AddOrGetExisting(value.Key, value.Value, policy.AbsoluteExpiration, value.RegionName);
    if (tmpValue != null)
        return new CacheItem(value.Key, (Stream)tmpValue);
    else
        return null;
}

The key item to note here is that in the first method, I do a check on the object to make sure I’m receiving a stream. Again, that was my design choice since I want to deal with the streams.

The final overload is where all the heavy work is…

public override object AddOrGetExisting(string key, object value, DateTimeOffset absoluteExpiration, string regionName = null)
{
    if (!(value is Stream))
        throw new ArgumentException("value parameter is not of type Stream");

    // if object exists, get it
    object tmpValue = this.Get(key, regionName);
    if (tmpValue != null)
        return tmpValue;
    else
    {
        //TODO: wrap this in some exception handling

        // create subfolder for region if it was specified
        if (regionName != null)
            cacheRoot.CreateSubdirectory(regionName);

        // add object to cache
        FileStream fileStream = File.Open(GetItemFileName(key, regionName), FileMode.Create);

        ((Stream)value).CopyTo(fileStream);
        fileStream.Flush();
        fileStream.Close();

        return null; // successfully added
    }
}

We start by checking to see if the object already exists and return it if found in the cache. Then we create a subdirectory if we have a region (region implementation isn’t required). Finally, we copy the value passed in to our file and save it. There really should be some exception handling in here to make sure we’re handling things in a way that’s a little more thread safe (what if the file gets created between when we check for it and start the write?). And the get should be checking to make sure the file isn’t already open when doing its read. But I’m sure you can finish that out.
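For what it's worth, here is one hedged sketch (not the author's code) of how the write in the else block could be made a little more defensive; FileMode.CreateNew turns the check-then-write race into an IOException we can treat as "someone else already cached it":

// create subfolder for region if it was specified (as in the original)
if (regionName != null)
    cacheRoot.CreateSubdirectory(regionName);

try
{
    // CreateNew fails if another thread or instance created the file between
    // our Get() check and this point, so we never clobber a cached item.
    using (FileStream fileStream = new FileStream(
        GetItemFileName(key, regionName), FileMode.CreateNew, FileAccess.Write, FileShare.None))
    {
        ((Stream)value).CopyTo(fileStream);
        fileStream.Flush();
    }
}
catch (IOException)
{
    // Another writer beat us to it; treat the item as already cached.
    return this.Get(key, regionName);
}

return null; // successfully added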

Now there’s still about a dozen other methods that need to be fleshed out eventually. But these give us our basic get and add functions. What’s still missing is handling evictions from the cache. For that we’re going to use a timer.

public FileCache() : base()
{
    System.Threading.TimerCallback TimerDelegate = new System.Threading.TimerCallback(TimerTask);

    // time values should be based on polling interval
    timerItem = new System.Threading.Timer(TimerDelegate, null, 2000, 2000);
}

private void TimerTask(object StateObj)
{
    int a = 1;
    // check file system for size and if over, remove older objects


    //TODO: check polling interval and update timer if its changed
}

We’ll update the FileCache constructor to create a delegate using our new TimerTask method, and pass that into a Timer object. This will execute the TimerTask method at regular intervals in a separate thread. I’m using a hard-coded value, but we really should check to see we have a specific polling interval set. Of course, we should also put some code into this method so it actually does things like check to see how much room we have in the cache and evict expired items (by checking via the private method I suggested earlier), etc…
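As a starting point, here is a rough, hypothetical sketch of an eviction routine that TimerTask could call. It assumes a maxCacheSizeInBytes field (which could be derived from the local storage allocation), the IsExpired helper sketched earlier, and a using System.Linq directive; production code should also guard against files that are still open:

private long maxCacheSizeInBytes = 100 * 1024 * 1024; // hypothetical cap

private void EvictItems()
{
    var files = cacheRoot.GetFiles("*", SearchOption.AllDirectories)
                         .OrderBy(f => f.LastAccessTimeUtc)
                         .ToList();

    // Remove expired items first.
    foreach (var expired in files.Where(f => IsExpired(f.FullName)).ToList())
    {
        expired.Delete();
        files.Remove(expired);
    }

    // Then trim the least recently used files until we are under the size cap.
    long totalSize = files.Sum(f => f.Length);
    foreach (var file in files)
    {
        if (totalSize <= maxCacheSizeInBytes)
            break;
        totalSize -= file.Length;
        file.Delete();
    }
}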

The Implementation

With our custom caching class done (well not done but at least to a point where its minimally functional), its time to implement it. For this, I opted to setup an MVC Web Role that allows folks to upload an image file to Windows Azure Blob storage. Then, via a WCF/REST based service, it would retrieve the images twice. The first retrieval would be without using caching, the second would be with caching. I won’t bore you with all the details of this setup, so we’ll focus on just the wiring up of our custom FileCache.

We start appropriately enough with the role’s Global.asax.cs file, where we add a public property that represents our cache (so it’s available anywhere in the web application):

public static Caching.FileCache globalFileCache = new Caching.FileCache();

And then I update the Application_Start method to retrieve our LocalResource setting and use it to set the CacheRootPath property of our caching object.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    Microsoft.WindowsAzure.CloudStorageAccount.SetConfigurationSettingPublisher(
        (configName, configSetter) =>
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName))
    );

    globalFileCache.CacheRootPath = RoleEnvironment.GetLocalResource("filecache").RootPath;
}

Now ideally we could make it so that the CacheRootPath instead accepted the LocalResource object returned by GetLocalResource. This would then also mean that our FileCache could easily manage against the maximum size of the local storage resource. But I figured we’d keep any Windows Azure specific dependencies out of this base class and maybe later look at creating a WindowsAzureLocalResourceCache object. But that’s a task for another day.

OK, now to wire the cache into the service that will retrieve the blobs. Let’s start with the basic implementation:

public Stream GetImage(string Name, string container, bool useCache)
{
    Stream tmpStream = null; // could end up being a filestream or a memory stream

    var account = CloudStorageAccount.FromConfigurationSetting("ImageStorage"); 
    CloudBlobClient blobStorage = account.CreateCloudBlobClient();
    CloudBlob blob = blobStorage.GetBlobReference(string.Format(@"{0}/{1}", container, Name));
    tmpStream = new MemoryStream();
    blob.DownloadToStream(tmpStream);

    WebOperationContext.Current.OutgoingResponse.ContentType = "image/jpeg";
    tmpStream.Seek(0, 0); // make sure we start the beginning
    return tmpStream;
}

This method takes the name of a blob and its container, as well as a useCache parameter (which we’ll implement in a moment). It uses the first two values to get the blob and download it to a stream which is then returned to the caller with a content type of “image/jpeg” so it can be rendered by the browser properly.

To implement our cache we just need to add a few things. Before we try to set up the CloudStorageAccount, we’ll add these lines:

// if we're using the cache, lets try to get the file from there
if (useCache)
    tmpStream = (Stream)MvcApplication.globalFileCache.Get(Name);

if (tmpStream == null)
{

This code tries to use the globalFileCache object we defined in the Global.asax.cs file and retrieve the blob from the cache if it exists, provided we told the method useCache=true. If we couldn’t find the file (tmpStream == null), we’ll then fall into the block we had previously that will retrieve the blob image and return it.

But we still have to add in the code to add the blob to the cache. We’ll do that right after we call DownloadToStream:

    // "fork off" the adding of the object to the cache so we don't have to wait for this
    Task tsk = Task.Factory.StartNew(() =>
    {
        Stream saveStream = new MemoryStream();
        blob.DownloadToStream(saveStream);
        saveStream.Seek(0, 0); // make sure we start the beginning
        MvcApplication.globalFileCache.Add(Name, saveStream, new DateTimeOffset(DateTime.Now.AddHours(1)));
    });
}

This uses an async task to add the blob to the cache. We do this asynchronously so that we don’t block returning the blob back to the requestor while the write to disk completes. We want this service to return the file back as quickly as possible.

And that does it for our implementation. Now to testing it.

Fiddler is your friend

Earlier, you may have found yourself saying “self, why did he use a service for his implementation”. I did this because I wanted to use Fiddler to measure the performance of calls to retrieve the blob with and without caching. And by putting it in a service and letting Fiddler monitor the response times, I didn’t have to write up my own client and put timings around it.

To test my implementation, I fired up Fiddler and then launched the service. We should see calls in Fiddler to SimpleService.svc/GetImage, one with cache=false and one with cache=true. If we select those items and select the Statistics tab, we should see some significant differences in the “Overall Elapsed” times of each call. In my little tests, I was seeing anywhere from a 50-90% reduction in the elapsed time.


In fact, if you run the tests several times by hitting refresh on the page, you may even notice that the first time you hit Windows Azure storage for a particular blob, you may have additional delay compared to subsequent calls. It’s only a guess, but we may be seeing Windows Azure storage doing some of its own internal caching there.

So hopefully I’ve described things well enough here and you can follow what we’ve done. But if not, I’m posting the code for you to reuse. Just make sure you update the storage account settings and please please please finish the half started implementation I’m providing you.


Manu Cohen-Yashar (@manukahn) posted Explain timeouts on Windows AppFabric Cache on 8/2/2012:

I had many customers complaining about performance degradation, timeout errors and other exceptions they got when using Windows AppFabric Cache.

When digging into the logs we found three popular Microsoft.ApplicationServer.Caching.DataCacheException errors:

  1. ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.
  2. ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure.
  3. ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated.

To learn about the server condition I run the Get-CacheClusterHealth Windows PowerShell Command as described in the server Health Monitoring document.

To verify the client situation I run the following command: Netstat -nat | find "22233" | wc -l

This tells us how many connections the client is trying to establish. If we get large numbers (more than 50), it means that there is a situation of client network contention: the client is trying to establish too many connections, yet something blocks the client from establishing them.

We can also look at WCF performance counters and search for the numbers of connections.

To fix client network contention we have to configure some throttling configuration:

AppFabric client config:
<dataCacheClient requestTimeout="15000" channelOpenTimeout="3000" maxConnectionsToServer="1000"…

When using the cache over an HTTP channel, for example with Windows Azure Cache, it is required to configure ServicePointManager as well. So in each client, make sure this is called on start:

    ServicePointManager.UseNagleAlgorithm = false;
    ServicePointManager.Expect100Continue = false;
    ServicePointManager.SetTcpKeepAlive(false);
    ServicePointManager.DefaultConnectionLimit = 1000;

Now there will be no bottleneck in the client, no contention and no timeouts.


Dan Pastina reported AD RMS SDK 2.0 now available on MSDN Library site on 8/1/2012:

As we mentioned a few weeks back there is a lot of great work that is happening to support and enable AD [Rights Management Services] (RMS) in the developer tools space.

In case you missed it, here is a link to where I announced the official release of the AD RMS SDK and AD RMS Client 2.0.

The AD RMS SDK 2.0 enables developers to build applications that can work with AD RMS Client 2.0 to handle complex security practices such as key management, encryption and decryption processing and it offers a much more simplified API for easy application development. If you have tried in the past and found it difficult to build AD RMS-aware applications, we hope you will be pleased with the refinements and improvements that our 2.0 offering brings to the table.

And today, I'm further pleased to see that all of our latest content in the AD RMS SDK 2.0 is now published and available online in the MSDN Library site here: http://msdn.microsoft.com/en-us/library/hh535290(VS.85).aspx.

Hop over there and check it out if you like and feel free to pass the word on to any of your friends and associates who might be interested in developing AD RMS applications.


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Nuno Godinho (@NunoGodinho) described the Service Bus 1.0 Beta and linked to his blog in a 8/3/2012 post to Red Gate Software’s ACloudyPlace blog:

We all know about Windows Azure Service Bus and how it allows features like Relay, Messaging and even integration. Personally this is one of my favorite features in Windows Azure, but all this only worked in the Cloud, and sometimes the same features and capabilities would be very good to have available On-Premises also. That’s exactly what Service Bus 1.0 is: it brings those capabilities and features to the On-Premises world, making it easier to have complete parity between Cloud and On-Premises.

Of course this version still isn’t the full version of what we see in Windows Azure, but it will help a lot, I’m sure. This Beta is what the Team calls the “Service Bus Messaging Engine” release, as stated by Clemens Vasters in one of his responses at StackOverflow, and so this version still doesn’t have any features related to the Relay part, but only the Messaging part.

Another very important part of this release is that by using it we’ll start getting the same (or at least very similar at this moment) API for both Cloud and On-Premises versions, which makes things a lot easier when we build solutions that need to be deployed in both.

Read the rest of Nuno’s article on his blog

Full disclosure: I’m a paid contributor to Red Gate Software’s ACloudyPlace blog.


Maarten Balliauw (@maartenballiauw) posted a preface and link to his Hands-on Windows Azure Services for Windows post on 8/1/2012 to Red Gate Software’s ACloudyPlace blog:

A couple of weeks ago, Microsoft announced their Windows Azure Services for Windows Server. If you’ve ever heard about the Windows Azure Appliance (which is vaporware imho), you’ll be interested to see that the Windows Azure Services for Windows Server are in fact bringing the Windows Azure Services to your datacenter. It’s still a Technical Preview, but I took the plunge and installed this on a bunch of virtual machines I had lying around. In this post, I’ll share with you some impressions, ideas, pains and speculations.

Why would you run Windows Azure Services in your own datacenter? Why not! You will make your developers happy because they have access to all services they are getting to know and getting to love. You’ll be able to provide self-service access to SQL Server, MySQL, shared hosting and virtual machines. You decide on the quota. And if you’re a server hugger like a lot of companies in Belgium: you can keep hugging your servers. I’ll elaborate more on the “why?” further in this blog post.

Note: Currently only SQL Server, MySQL, Web Sites and Virtual Machines are supported in Windows Azure Services for Windows Server. Not storage, not ACS, not Service Bus, not…

Read the rest at blog.maartenballiauw.be

On 8/2/2012, I updated my Configuring Windows Azure Services for Windows Server post of 8/1/2012, which also includes a link to Maarten’s article, with additional licensing issues for WAS4WS when used for enterprise (private) clouds.

Full disclosure: I’m a paid contributor to the ACloudyPlace blog.


The Seattle Times (@seattletimes) reported Windows Server 2012 released to manufacturing on 8/1/2012:

Microsoft's big-launch fall seems to be chugging along on schedule with the announcement today that Windows Server 2012 has been released to manufacturing and will be generally available on Sept. 4.

Microsoft also announced today that Windows 8 has been released to manufacturing.

In addition, Microsoft announced today that the final build of Visual Studio 2012 has been completed. A developer event scheduled for Sept. 12 in Redmond will include hands-on sessions with Visual Studio 2012.

Also: MSDN subscribers will receive free, one-time, 12-month Windows Store and Windows Phone Developer accounts, which allow developers to create, publish and sell apps, the company said.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Nathan Totten (@ntotten) and Nick Harris (@cloudnick) produced Cloud Cover Episode 85 - Windows Azure, Windows 8, and MVC 4 Demos on 8/3/2012:

Join Nate and Nick each week as they cover Windows Azure. You can follow and interact with the show at @CloudCoverShow.

In this episode Nick and Nate review the samples, blog posts, and tutorials that have been released for Windows Azure since the June release. Nick demos his newest Windows 8 + Windows Azure sample that uses geospatial data and blob storage. Nate shows his modern Cloud Survey application that is built using Ember.js and ASP.NET Web APIs.

In the News:

Samples, Articles, and Content


• Nick Harris (@cloudnick) issued a Windows Azure toolkit for Windows 8 Release Preview v2.0 CodePlex notification on 8/3/2012:

  • Update all /Sample web tier projects to VS 2012, MVC 4, Web API and Windows Azure Websites deployment model
  • Update all Notification NuGet packages for use on Windows Azure Websites
  • Update all /Sample client apps for API changes in WinRT
  • Merge RawNotifications sample with Notifications sample
  • Remove VSIX File new project experience

Please download this for Windows Azure Toolkit for Windows 8 functionality on Windows 8 Consumer Preview.

The core features of the toolkit include:

  • Automated Install – Scripted install of all dependencies including Visual Studio 2010 Express and the Windows Azure SDK on Windows 8 Consumer Preview.
  • Project Templates – Windows 8 Metro Style app project templates in Dev 11 in both XAML/C# and HTML5/JS with a supporting C# Windows Azure Project for Visual Studio 2010.
  • NuGet Packages – Throughout the development of the project templates we have extracted the functionality into NuGet Packages for Push Notifications and the Sample ACS scenarios. You can find the packages here and full source in the toolkit under /Libraries.
  • Samples – Five sample applications demonstrating different ways Windows 8 Metro Style apps can use ACS and Push Notifications
  • Documentation – Extensive documentation including install, file new project walkthrough, samples and deployment to Windows Azure.

Nathan Totten (@ntotten) posted Node.js, Socket.IO, and ASP.NET MVC 4 on Windows Azure Web Sites – Tankster Command Sample on 8/2/2012:

Today I am releasing another sample application called Tankster Command. This sample shows how to build an application that uses both ASP.NET and Node.js side-by-side. I used ASP.NET MVC 4 to build the bulk of the application and used Node.js and Socket.IO to create a simple chat client. This sample also uses Facebook to authenticate the users.

The sample is open sourced under an Apache 2 license and is available to download on Github. Additionally, you can see a working demo of this application at http://tankstercommand.azurewebsites.net. Simply login with your Facebook account and you can see the sample in action.

Under the Hood

This application is divided into two parts. The first is the ASP.NET MVC application. This part of the app is fairly straightforward. I am using a few controllers, a single Web API controller, and a few views.

The one thing to note about the views is the use of Handlebars for templating. Both the leaderboard and the chat pages generate portions of their UI on the client after calls to services. In the case of the leaderboard, I simply call the leaderboard API and return the top 10 players as JSON. After that I use Handlebars to generate HTML from the template and insert it into the DOM. You can see the Handlebars template and the script that runs on page load below.
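The original post embeds the actual template and page-load script; as a rough illustration of the pattern Nathan describes (the endpoint, element ids, and JSON shape below are assumptions, not taken from the sample), the client-side piece might look something like this:

```javascript
// Minimal sketch of the pattern described above, not the Tankster Command sample's
// actual code. Assumes a Handlebars template in a <script id="leaderboard-template">
// block, a <div id="leaderboard"> placeholder, and a Web API endpoint at /api/leaderboard
// that returns the top 10 players as JSON.
$(function () {
    var template = Handlebars.compile($("#leaderboard-template").html());

    $.getJSON("/api/leaderboard", function (players) {
        // Render the JSON through the template and insert the resulting HTML into the DOM.
        $("#leaderboard").html(template({ players: players }));
    });
});
```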

The second portion of the application runs Socket.IO on Node.js. Socket.IO is useful for building real-time web applications that push data between clients. In this case, I am using Socket.IO to create a simple chat client. The UI of the chat page still uses an ASP.NET controller and view along with Handlebars on the client to generate HTML when a chat message is sent or received.

Below you can see the script that connects to the socket.io server. This script handles sending chat messages as well as receiving messages and announcements.
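The post embeds Nathan's actual script; as a minimal hedged sketch of a Socket.IO chat client along these lines (the event names, element ids, and template id below are illustrative assumptions, not taken from the sample), it might look like this:

```javascript
// Minimal sketch, not the sample's actual code: connect to the Socket.IO server,
// render incoming chat messages and announcements with a Handlebars template,
// and emit a "chat" event when the user submits the form.
var socket = io.connect(); // connects back to the host serving the page
var messageTemplate = Handlebars.compile($("#message-template").html());

socket.on("chat", function (message) {
    $("#messages").append(messageTemplate(message));
});

socket.on("announcement", function (announcement) {
    $("#messages").append(messageTemplate(announcement));
});

$("#chat-form").submit(function (e) {
    e.preventDefault();
    socket.emit("chat", { text: $("#chat-text").val() });
    $("#chat-text").val("");
});
```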

Caveats when Using Both NuGet and NPM Packages

There are a few things you should be aware of when publishing a site to Windows Azure Web Sites that uses both NuGet and Node packages.

First, if you attempt to publish this site with Git, the publishing server will only download NuGet packages. Currently, our publishing servers don’t run NPM when you are publishing a csproj file. This will cause your Socket.IO chat server to fail. For now the workaround is to use Web Deploy.

Second, if you are publishing your app using Web Deploy, you must include the node_modules folder and all its contents in the csproj. You can do this in Visual Studio 2012 by showing all files in Solution Explorer, right-clicking the node_modules folder, and clicking Include in Project.

Conclusion

I hope this sample gives you a better idea of how you can mix various technologies on Windows Azure Web Sites. You could do similar things with PHP.


Himanshu Singh (@himanshuks) reported Triage.me and 10g.io Take Home Top Honors in Twilio’s Developer Contest to Build The Next Great Twilio App on Windows Azure in an 8/2/2012 post:

Twilio just announced the winners of its Developer Contest, where developers built awesome Twilio apps running on Windows Azure. The contest, which was launched in concert with the “Meet Windows Azure” event June 7, awarded the winner up to $60,000 of Windows Azure usage over two years through Microsoft BizSpark Plus. Moreover, both the 1st and 2nd place winners received a generous amount of Twilio credits and a full pass to TwilioCon2012, plus unlimited bragging rights.

The Winners!

Dan Wilson (left) and Mark Olschesky (far left) took top honors with Triage.me, an app that helps people find local medical care by allowing patients to text a local number with their location. The app replies with information about the nearest health care clinic and suggested transportation methods.

A Close Second!

Chris Holloway (right) and Jacob Sherman (far right) came in second place with their stock market app, 10g.io, which makes it easier to manage and analyze the stock market. The app provides SMS notifications that a user can set up for major market changes and updates, or just to keep up to date with their favorite stocks. Click here to watch a video about 10g.io.

Interview with The Winner!

I had the opportunity to chat with Mark Olschesky to learn a bit more about their winning solution, Triage.me. Read on to find out what he had to say.

Himanshu Kumar Singh: Tell me about yourself and how Moxe Health came about.

Mark Olschesky: Moxe Health is a two-person operation based out of Madison, Wisconsin - it’s me (Mark) and Dan. We worked together at Epic, one of the major Healthcare IT and Electronic Medical Record vendors. We met each other while working on the same assignment together at Children’s Medical Center of Dallas. We bonded over troubleshooting enterprise printing errors and routing hospital charges. At the beginning of this year, we decided to begin working together on our own projects.

We were working on a Case Management solution when we went to the Milwaukee Build Health hackathon in March. We didn’t expect much to come out of the event; we went there trying to identify engineers we could recruit. But, the end result of the event was the first prototype for triage.me.

HKS: What problems were you trying to solve with Triage.me?

MO: A representative from one of the hospitals in Milwaukee came to discuss the city’s combined effort from hospitals to help prevent Emergency Department (ED) misuse. ED misuse is one of the largest problems healthcare organizations face today. Nationally, there were 124 million ED visits in 2011, of which the Center for Disease Control & Prevention (CDC) estimates 50% could be handled in a primary care setting. With an average charge for each ED encounter approaching $700 vs. the average physician office charge of $150, every patient redirected away from the ED saves the payer $550. This represents a total potential market savings of $34 billion for facilitating correct use of primary care.

Providing people with the tools they need to find appropriate care everywhere saves everyone, hospitals and patients, money and time. Patients receive an SMS from triage.me after emergency room discharge. By sending a text to triage.me with their problem and current address, we route the person towards the nearest clinic, providing the address and a link to directions (for smartphone users). Many clinics that work with the underinsured also often have variable hours and locations. We've made it easy for these clinics to update their hours and location via SMS so that we can better route triage.me users to them.

HKS: What solution architecture approaches did you consider?

MO: We knew that text message-based communication was central to our design for triage.me. Not everyone owns a smartphone, particularly in the demographics that we aim to serve. I had worked some with Twilio in the past, so it was a great way to quickly provide SMS functionality under the quick dev timelines of a hackathon. Our company was already a BizSpark partner member and I really liked using C#/.NET/MVC 3/SQL Server for healthcare IT applications. Specifically, the integration between MVC 3 and SQL Server makes it easy to quickly create forms, scripts and reports customized for our users, both patients and clinical users (nurses, case managers, public health officials).

While at its most basic roots I could have written the base application in Rails or Django, I knew that, as with most Healthcare IT applications, triage.me would eventually need to provide more complex reporting or integration with other systems. This is something that is not only supported well by .NET platforms, but it’s something that Hospital CIOs and technical teams are familiar with architecturally. It’s a strong technical solution both architecturally and politically, which is why we ultimately went with our Microsoft stack.

HKS: What cloud vendors did you consider and why did you choose Windows Azure?

MO: Since the hackathon, we looked at EC2 and Rackspace in addition to Windows Azure. Right now, Windows Azure’s offer of a BAA with companies like ours is the clear differentiator for Healthcare IT. We can trust Windows Azure with sensitive patient information and to stand with us instead of having to bear the entire burden of problems and fines associated with potential data breaches in the cloud. Cloud hosting has always been more affordable, but not always feasible due to that sensitivity for patient data. Now it’s something that we can engage in, which is helpful to us as we’re starting out as a company.

Of course, the added benefit to us to use Windows Azure is the direct integration with Visual Studio tools. Simple Web Deploy to staging and production environments is a major benefit to productivity.

HKS: Can you provide details of how Triage.me works and how it uses Windows Azure?

MO: The workflow kicks off when a user texts or uses the web form on triage.me to look for care. We ascertain that the person has a low acuity, i.e., they don’t need to go to an emergency room. We then ask the user for their location and their problem (headache, needs a pregnancy test, etc.). This is where the heavier computing happens. We receive the text message inputs back via XML, which we parse and then analyze on a number of attributes that we have stored on Windows Azure SQL Database. At its most basic level, we are analyzing the user’s proximity to the nearest open clinic. At its most complex level, we’re analyzing the person’s problem to route them towards a specific clinic specialty (Pediatrics, psych, OB). This involves a lot of calculations that are occurring in real-time. LINQ to SQL not only makes this easy to code, but also operates blazingly fast. The SMS conversation returns responses back to users within seconds. As a result, within 30 seconds of interaction we’ve provided a specific recommendation of a clinic for a user to go to via SMS, tailored specifically for them.
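Moxe Health's implementation is C#/LINQ to SQL, as Mark describes. Purely as a hypothetical illustration of the nearest-open-clinic routing he outlines (written here in JavaScript, with made-up field names, and not their actual code), the core selection logic might be sketched like this:

```javascript
// Hypothetical illustration of the routing described above: given the caller's
// location and problem category, filter to clinics that are open and offer a
// matching specialty, then pick the closest one. Field names are invented.
function toRadians(deg) { return deg * Math.PI / 180; }

// Great-circle distance in kilometers between two lat/lon points (haversine formula).
function distanceKm(a, b) {
    var R = 6371;
    var dLat = toRadians(b.lat - a.lat);
    var dLon = toRadians(b.lon - a.lon);
    var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRadians(a.lat)) * Math.cos(toRadians(b.lat)) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * R * Math.asin(Math.sqrt(h));
}

// Returns the nearest open clinic with a matching specialty, or undefined if none match.
function routeToClinic(user, clinics) {
    return clinics
        .filter(function (c) {
            return c.isOpenNow && c.specialties.indexOf(user.problemCategory) !== -1;
        })
        .sort(function (x, y) {
            return distanceKm(user.location, x.location) - distanceKm(user.location, y.location);
        })[0];
}
```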

Triage.me is deployed on a Windows Azure role for staging usage right now. We will be moving our production usage to a Windows Azure role soon.

For clinic staff and professional end users, we use Windows Azure Caching to handle session variables across cloud servers and services. We were initially storing everything on Windows Azure SQL Database so that we could transition our database to a locally hosted SQL Server group, but now that cloud hosting is a more viable option, we’ll look towards using Windows Azure VMs and Windows Azure Tables to handle our database storage as necessary.

HKS: What benefits have you realized with Windows Azure? What are the benefits for your customers and end-users?

MO: There are two major benefits that we’ve found with Windows Azure. The first is the cost. We never want to charge the uninsured for our services, nor do we want to charge community clinics or FQHCs that work with this population. As such, it’s essential to keep our operating costs low. Receiving free hosting from Microsoft as a result of BizSpark and now winning the Twilio-Windows Azure contest has helped us a lot in getting our project off the ground. Into the future, as we expand our services, knowing that we can deploy our solutions through the cloud will keep our costs low.

The second benefit is the flexibility that we’ve been given for deployment options. Our architecture is not the limiting factor in our ability to deploy a solution for our customers. As such, we can spend more of our time focusing on building new tools to help people in need.

HKS: What are your future plans with this solution, and otherwise? How do you plan to leverage the Windows Azure benefits you just won?

MO: Right now, we’re working on securing a formal pilot in a city so that we can provide routing solutions that fit within the community’s public health initiatives. While our tools work well independently, we know that we need to engage hospitals and payers to provide more focused recommendations and to facilitate the patient transition to primary care.

As for the Windows Azure benefits, we’ll be expanding our Windows Azure usage into hosting our production deployments. I’m also looking forward to experimenting with the new VM capabilities for data storage options.

HKS: And lastly, why choose a bee for your logo? : )

MO: Our logo and mascot is a bee because at Build Health where we created triage.me, we were sharing space with a beekeeping convention. Beekeepers were regularly walking in and out of our room. We’ve kept the bee around because we like to think of our routing services like a bee: when it knows where it wants to go, it makes a “bee-line” to its destination! : )

Get started on Windows Azure today - activate your Windows Azure 90-day free trial account.

If you’re new to Twilio, sign up for an account here. New users can grab a cool 1,000 free messages or voice minutes here. Visit Twilio’s Quickstart Tutorials to build with Voice, SMS and Client.

Additional Info:


MarketWire published a THINKstrategies Strategic Thinking Profile: Geminare's myVault Email Archive Powered by Microsoft's Windows Azure news release on 8/2/2012:

PALO ALTO, CA -- (Marketwire) -- 08/02/12 -- THINKstrategies, a leading strategic consulting company, has published a Strategic Thinking Profile that examines Geminare's unique Cloud enablement capabilities and how the company is leveraging its Global ISV relationship and strategic alliance with Microsoft by embedding its latest Recovery as a Service solution into Microsoft's Windows Azure platform to create a new offering, called myVault Email Archive powered by Microsoft Windows Azure, concluding that, "The Geminare-Microsoft alliance represents a clear example of a win-win-win-win situation for the two companies, their partners and countless end customers that will benefit from this solution."

The THINKstrategies report addresses the topic of today's rapid growth of Cloud services, acknowledging the "accelerated demand on technology partners, enablers and trusted advisors to remain relevant and to provide value within the evolving Cloud ecosystem," and suggesting that providers looking to offer Cloud services would do well to explore high quality, feature-rich Cloud applications, solutions and enablement capabilities from Geminare, such as myVault Email Archive, that are "quickly becoming the differentiating, value-added solutions that partners must rely on to maintain and win customers."

The report's author, Jeffrey M. Kaplan, Managing Director of THINKstrategies and founder of the Cloud Computing Showplace and Cloud Channel Summit conferences, observes, "Geminare has emerged as a market leader" in Cloud enablement capabilities through its patented Cloud CORE Platform, and that Geminare's ability to capitalize on new market opportunities while at the same time preserving the value of its existing software assets and customer base, is what attracted Microsoft to team with Geminare, resulting in the strategic partnership to launch the myVault Email Archive offering as well as future Cloud-enabled applications.

Geminare's CEO and President, Joshua Geist, said, "Having worked together on a number of key events and educational sessions, I find that Jeff's insights into the Cloud-based RaaS and SaaS markets are outstanding, and we're very pleased to have been recognized by THINKstrategies for both our quality offerings and our increasing global market presence."

Click here to read THINKstrategies' profile and learn more about Geminare's new RaaS solution, powered by Microsoft Windows Azure™.

About myVault Email Archive
Geminare's myVault Email Archive powered by Microsoft Windows Azure™ is a groundbreaking, instant-on data and email archiving solution that offers feature-rich archiving and backup capabilities, delivered entirely from the Cloud. With no hardware or software requirements, myVault archiving can turn on in minutes, giving users the ability to locate and restore archived emails instantly and securely from their desktop, handheld device or browser.

myVault is the first of Geminare's Recovery as a Service (RaaS) offerings selected by Microsoft for inclusion in the Windows Azure cloud, with the balance of Geminare's BC/DR portfolio, including Cloud Recovery, Rapid Recovery and Online Backup, scheduled for availability in Windows Azure by year end.

End Users looking to sign up for a free trial of the service offering can do so directly at Microsoft's Windows Azure Marketplace: https://datamarket.azure.com/browse/Applications

Partners can register directly at http://learnmore.launchmyvault.com

About THINKstrategies
THINKstrategies was founded in 2001 and is a leading consulting firm helping IT decision-makers and technology solution providers leverage the value of cloud computing and managed services. Jeffrey M. Kaplan is the Managing Director of THINKstrategies, and the founder of the Cloud Computing Showplace and Cloud Channel Summit conferences, supporting clients in capitalizing on the migration of the technology industry from a product-centric to a services-driven business model.

About Geminare
Geminare enables ISVs to transition their products into cloud-based offerings. Geminare's award-winning, patented Cloud CORE Platform, a proven, mature, multi-tiered service delivery vehicle that is the foundation of Geminare's entire Recovery as a Service (RaaS) data protection suite, has allowed leading and innovative companies such as Microsoft, CA Technologies, OpSource, Arrow, Iron Mountain, CenturyLink, Hosting.com, Bell, Allstream, Ingram Micro, LexisNexis, Long View Systems and many others, to enter the RaaS market with their own suite of data protection cloud offerings. Geminare is headquartered in Palo Alto, CA, with additional operations in Toronto, Canada.

All names referred to are trademarks or registered trademarks of their respective owners.


Jason Zander (@jlzander) announced the Final Build for VS 2012 - Availability and Launch Dates Ahead on 8/1/2012:

The final build of Visual Studio 2012 is now complete! The engineering team is finished and is now preparing the build for our numerous distribution channels.

I’d also like to congratulate the Windows 8 team for completing their important release to manufacturing today. You can read more from the Windows team on the Building Windows 8 blog.

imageI’m looking forward to our next milestone when we make Visual Studio 2012 available for everyone to download from MSDN and elsewhere on August 15th. Watch my blog for the official release information.

Finally, I’d like to invite you to join Soma and me on September 12th as we officially launch Visual Studio 2012 via a live online event. For more information, please visit http://www.visualstudiolaunch.com/. I hope you will tune in to learn more about all the new capabilities in VS 2012.

Exciting times ahead!


Himanshu Singh (@himanshuks) posted Cross Post: Somasegar on Accelerating Startups Globally with Windows Azure on 7/31/2012 (missed when published):

Don't miss Microsoft Corporate Vice President S. Somasegar's latest blog post, "Accelerating Startups Globally with Windows Azure" to learn how the Microsoft Accelerator for Windows Azure is enabling startups that do big things with the cloud to increase their chances of success.

In his post, Soma talks about his experiences participating in the launch of the Microsoft Accelerator for Windows Azure in Israel, China, and India. He discusses why the program was launched in these three countries, and gives his perspective on the unique characteristics of the technology and startup landscape in each. He also talks about the upcoming US-based accelerator program.

Check the post out here.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Beth Massi (@bethmassi) posted LightSwitch Community & Content Rollup–July 2012 on 8/2/2012:

Last year I started posting a rollup of interesting community happenings, content, samples and extensions popping up around Visual Studio LightSwitch. If you missed those rollups you can check them all out here: LightSwitch Community & Content Rollups.

Visual Studio 2012 Release Date Announced!

The big news this month is that we announced the date we’ll be releasing Visual Studio 2012 – August 15! Read all about it on Jason’s blog: Final Build for VS 2012 - Availability and Launch Dates Ahead.

And for a good set of articles and resources on what you can expect to see in LightSwitch in Visual Studio 2012 see: New Features Explained – LightSwitch in Visual Studio 2012

We’re super excited to get the latest version of LightSwitch into your hands!

LightSwitch HTML Client Preview

Last month we announced our roadmap in making it easy to build HTML5-based companion clients for your LightSwitch applications and released the LightSwitch HTML Client Preview for Visual Studio 2012. Since then a lot of folks have downloaded the preview bits and we’ve been having a lot of great conversations and feedback in the HTML Client Preview Forum. If you haven’t done so already, I encourage you to have a look. Here are some more resources to check out:

LightSwitch on Visual Studio Toolbox Show on Channel 9

Last week while I was up in Redmond I met up with Robert Green, the host of the Visual Studio Toolbox show on Channel 9, to show off some of the new features in LightSwitch in Visual Studio 2012 like OData support, the new Cosmo shell, row level security, and more. I also showed off some of the HTML client preview bits that were released last month in order to demonstrate building touch-centric mobile companion apps.

Watch: Visual Studio Toolbox- OData Support and HTML Clients in LightSwitch

More LightSwitch E-Books

In July we had a couple more e-books published. What’s awesome about these is that they include explanations of some of the new features in Visual Studio 2012 and best of all they’re FREE! Many thanks to the authors and publishers for these great resources.

LightSwitch Succinctly by Jan Van der Haegen - The author of this e-book, Jan Van der Haegen, is a self-described green geek who writes a monthly LightSwitch column for MSDN magazine. In LightSwitch Succinctly, he provides a quick tour of the different parts of the LightSwitch development environment so that you can judge whether Visual Studio LightSwitch would be an ideal tool to add to your belt.

Your Data Everywhere: Consuming LightSwitch's OData Services from Windows Phone Apps by Alessandro Del Sole - Alessandro Del Sole, author of Microsoft Visual Studio LightSwitch Unleashed, concludes his three-part series on the usefulness of the new support for Open Data Protocol in the latest version of LightSwitch.

Read: Part 1, Part 2, Part 3

Community Events & Conferences

Check out the list of upcoming events and sessions by the LightSwitch team. This is what our agenda looks like so far, but it’s only mid summer and we’ll probably have more to slip in as other plans materialize. For session details see my blog post from earlier this week: LightSwitch Sessions at a Variety of Upcoming Events

MSDN Magazine Column: Leading LightSwitch

Jan van der Haegen continues his journey into the depths of LightSwitch with his regular column in the July issue:

Leading LightSwitch: Tales of Advanced LightSwitch Client Customizations
Enjoy these tales of creating custom applications that show off the versatility and ease of use LightSwitch offers. You will also get a glimpse of how a real pro works with clients.

More Notable Content this Month

Extensions (see all 92 of them here!):

Samples (see all 86 of them here):

Team Articles:

Community Articles:

LightSwitch Team Community Sites

Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:
LightSwitch MSDN Forums
LightSwitch Developer Center
LightSwitch Team Blog
LightSwitch on Twitter (@VSLightSwitch, #VisualStudio #LightSwitch)

No significant articles today.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Eric Ligman posted Another large collection of Free Microsoft eBooks and Resource Kits for you, including: SharePoint 2013, Office 2013, Office 365, Duet 2.0, Azure, Cloud, Windows Phone, Lync, Dynamics CRM, and more on 7/30/2012 (missed when published):

Last week, I put up my Large Collection of Free Microsoft eBooks post (60+ eBooks) here on the blog, and the response and feedback I am receiving about how much you liked it is incredibly positive. Because of this, I thought I would put up this follow-up post, which includes even more free Microsoft eBooks available to you for download. Just like with the last list I published here for you, if you find this list helpful, please share it with your peers and colleagues so that they too can benefit from these resources.

• Developing an Advanced Windows Phone 7.5 App that Connects to the Cloud
• Developing Applications for the Cloud, 2nd Edition
• Building Hybrid Applications in the Cloud on Windows Azure
• Building Elastic and Resilient Cloud Applications - Developer's Guide to the Enterprise Library 5.0 Integration Pack for Windows Azure (PDF, EPUB, MOBI)

Here are Azure-related links from Eric’s earlier post:

• Moving Applications to the Cloud, 2nd Edition (PDF, EPUB, MOBI)
• Windows Azure Prescriptive Guidance (PDF, EPUB, MOBI)
• Windows Azure Service Bus Reference (PDF, EPUB, MOBI)
• Deploying an ASP.NET Web Application to a Hosting Provider using Visual Studio (PDF, EPUB, MOBI)

Can’t beat the price!


• My (@rogerjenn) Uptime Report for my Live OakLeaf Systems Azure Table Services Sample Project: July 2012 = 100% begins:

My live OakLeaf Systems Azure Table Services Sample Project demo runs two small Windows Azure Web role instances from Microsoft’s South Central US (San Antonio, TX) data center. This report now contains more than a full year of uptime data.

I didn’t receive Pingdom’s Monthly Report for July expected on 8/3/2012.

Here’s the detailed uptime report from Pingdom.com for July 2012:



Following is detailed Pingdom response time data for the month of July 2012:



This is the fourteenth uptime report for the two-Web role version of the sample project since it was upgraded to two instances. Reports will continue on a monthly basis.



David Linthicum (@DavidLinthicum) asserted “Believe it -- personal clouds exist, offering much-needed options for users who sync data between more than one device” in a deck for his Get ready for the personal cloud post of 8/3/2012 to InfoWorld’s Cloud Computing blog:

I hear a lot about personal clouds these days -- so much so that I figured it was a good idea to talk about what they are and the value they may bring.

A few types of personal clouds are gaining traction. One type is for the exclusive use of an individual, not a business; I'll call these "public/personal clouds." Another type covers clouds that exist in a home or a business and are managed by an individual; I'll call these "private/personal clouds."

Gartner vouches for public/personal clouds: "Consumers spend over $2 trillion a year on content, devices and services, and the emergence of personal clouds reflects their desire to access content on any device without complications or restrictions."

Public/personal clouds refer to consumer-oriented cloud services, such as Box.net, Dropbox, iCloud, and Evernote, targeted at individual users. They typically provide simple services, such as file, picture, notes, and content sharing, between devices. Most users who sync files between two or more devices and computers find huge value in these services -- I know I do.

Private/personal clouds are more complex and still in development. You can see and touch these devices, and they may provide some type of cloud service. For instance, many NAS devices offer personal cloud options and double as unique personal storage-as-a-service providers. In turn, you can access files remotely using a secured connection back to your NAS over the Internet.

Do personal clouds exist? You bet they do -- and they'll become a larger part of our personal lives over the course of time.

Moving forward, we'll purchase, use, and toss hundreds of devices and computers. However, our personal clouds will help us transcend the changing technology.


• Mike Neil described the Root Cause Analysis for recent Windows Azure Service Interruption in Western Europe in an 8/2/2012 post:

imageOn July 26 Windows Azure’s compute service hosted within one cluster in our West Europe sub-region experienced external connectivity loss to the Internet and other parts of Windows Azure. There was no impact to other regions or services throughout the duration of the interruption. The incident began at 11:09AM GMT and lasted for 2 hours and 24 minutes. Below is a more detailed analysis of the service disruption and its resolution.

Windows Azure’s network infrastructure uses a safety valve mechanism to protect against potential cascading networking failures by limiting the scope of connections that can be accepted by our datacenter network hardware devices. Prior to this incident, we added new capacity to the West Europe sub-region in response to increased demand. However, the limit in corresponding devices was not adjusted during the validation process to match this new capacity. Because of a rapid increase in usage in this cluster, the threshold was exceeded, resulting in a sizeable amount of network management messages. The increased management traffic in turn, triggered bugs in some of the cluster’s hardware devices, causing these to reach 100% CPU utilization impacting data traffic.

We resolved the issue by increasing limit settings in the affected cluster. We also increased the limit settings and improved automated validation across all Windows Azure datacenters. Additionally, we are applying fixes for the identified bugs to the device software. We have also improved our network monitoring systems to detect and mitigate connectivity issues before they affect running services. We sincerely apologize for the impact and inconvenience this caused our customers.


David Linthicum (@DavidLinthicum) asserted “Businesses that want to double as cloud computing providers must take three fundamental measures” in a deck for his 3 first steps in building your own cloud services article of 8/2/2012 for InfoWorld’s Cloud Computing blog:

These days, I'm running into more than a few innovative enterprises looking to stand up their own cloud computing services or APIs for consumption outside of the business. In essence, enterprises are becoming cloud computing providers.

Enterprises are standing up cloud services in support of new business opportunities, such as better supply chain integration, better customer service, or even the ability to charge a subscription fee for access to bits and pieces of their existing information systems that may be of value to outside users. In doing so, they may also gain a client list that includes partners, customers, or even unknown users leveraging these services for a fee.

Whatever the business reason, an enterprise has a few initial steps to consider before launching a commercial cloud:

Step 1: Determine the purpose of the cloud service, and map out basic use cases. This may seem like common sense, but many enterprises move forward without a good plan or foundational design. Remember: You're taking on the same responsibilities as the larger public computing providers; thus, splurge on design and planning cycles for your initial projects.

Step 2: Determine what information will be externalized, including where the data exists, how you'll get to it, and any security or governance issues. This means understanding the physical location of the data, the metadata, and the proper integration path from the source systems to those hosting the cloud service.

Step 3: Create an API/service management strategy, such as selecting the best path for externalization and management. This typically means the mechanisms for exposing the services, including what technology will be in place. A number of providers offer API management technology, both as software and out of the cloud. However, the more important issue to consider is how the services will be managed during production, including validating user access and guarding against service saturation. Service governance technology is available to address these questions.

Of course, there is a lot more to the process, depending on your ultimate goal. However, if you begin with these fundamental steps you'll find that you're well on your way.

A propos enterprise application of Windows Azure Services for Windows Server (WAS4WS).


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

Download The Microsoft Approach to Cloud Transparency white paper (PDF) of 6/25/2012 (missed when published):

Overview

This paper provides an overview of various risk, governance, and information security frameworks and standards. It also introduces the cloud-specific framework of the Cloud Security Alliance (CSA), known as the Security, Trust & Assurance Registry (STAR).


<Return to section navigation list>

Cloud Computing Events

David Gristwood (@ScroffTheBad) announced Microsoft Media Platform Summit, 5th September 2012, Amsterdam in an 8/2/2012 post:


Video delivery is consuming an ever larger percentage of Internet traffic; from user-generated content to streaming movies, end-user consumption appears to have no bounds. But delivering high-quality video to multiple devices, monetizing it and protecting your investments in producing it is not easy. Come and hear how Microsoft’s technologies can help you prepare, stream and protect content to multiple devices and how you can develop rich media applications for Windows 8.

Summit Goals

The Media Summit will be delivered by our Redmond-based Media Experts and Product team members to help you understand three things:

  1. Microsoft’s end-end vision for media delivery, client technologies, streaming protocols and media partner eco-system.
  2. How to produce engaging media applications on Windows 8.
  3. How Microsoft’s cloud based media platform allows you to stream live and on demand content to Windows, iOS and Android. [Emphasis added.]

Your team will leave the summit with an understanding of how to develop a comprehensive video delivery strategy that will meet your needs both today and well into the future.

Who Should Attend?

If you influence the online media strategy for your company in either a business or technical role, and are keen to understand how Microsoft technologies can help you simplify this process, then you should attend this session.

This is an invite only event - if you would like to attend, please email ukdev@microsoft.com to register your interest.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Kevin McLaughlin (@kmclaughlin69) posted VMware's 'Project Zephyr' Challenges Amazon, Microsoft In Public Cloud Battle to CRN’s Cloud blog on 7/1/2012 (missed when published):

VMware is planning to launch a public cloud infrastructure-as-a-service initiative, code named Project Zephyr, that will catapult the virtualization kingpin into one of the industry's hottest markets, CRN has learned.

According to sources familiar with VMware's plans, VMware has purchased a large amount of data center space in Nevada for Project Zephyr, an initiative aimed at showcasing its cloud software stack. Project Zephyr runs the vCenter Operations Management Suite, vCloud Director and Site Recovery Manager for failover and disaster recovery, along with EMC (NYSE:EMC) storage gear and Cisco (NSDQ:CSCO)'s Unified Computing System (UCS) as the computing platform.

Project Zephyr isn't just for show, however: VMware is planning to use it to offer a public cloud infrastructure-as-a-service that will compete with cloud services from Amazon (NSDQ:AMZN), Microsoft (NSDQ:MSFT) and other players in this segment.

Though similar in some ways to VCE -- the converged infrastructure joint venture between VMware, EMC and Cisco -- Project Zephyr is fully controlled by VMware and runs on data center space that it owns, sources told CRN.

"VMware basically purchased an entire data center; they have a lot of metal in a pretty massive building," said one source with knowledge of VMware's plans, who requested anonymity. "It's like a big neon sign saying, here are the benefits if you go with VMware end-to-end."

Project Zephyr is VMware's way of lighting a fire under its vCloud service provider partners, which have been slow to build out the infrastructure and business model for cloud services, sources told CRN. Dell (NSDQ:Dell), AT&T (NYSE:T), Bluelock and CSC are members of VMware's vCloud program in North America.

"VMware is doing this because none of its service provider partners are moving fast enough. Look at the adoption rate of vCloud Director with service providers -- it is non-existent," said the source, who requested anonymity.

VMware has been working on Project Zephyr since last year's VMworld conference, and the company may use this year's event to unveil the program and share more information about what it entails, sources told CRN.

A VMware spokesperson reached by CRN on Wednesday declined to comment on Project Zephyr, citing the company's policy of not responding to rumors or speculation. …

Read more: NEXT: How Project Zephyr Could Impact VMware's Channel


•• Janakiram MSV (@janakiramm) asked Is Cloud Foundry on its way to become the de facto PaaS standard of the Industry? in an 8/3/2012 post to the CloudStory.in blog:

I have been following Cloud Foundry from the day it was announced. It was very clear that VMware invested in it with a clear strategy – democratize PaaS by making it absolutely easy for hosters and enterprises to deploy it. Between 2008 and 2011, PaaS was associated with Google (App Engine), Microsoft (Windows Azure), Salesforce.com (Force.com / Heroku) and Engine Yard. But in the last year, half a dozen new players have entered the niche PaaS market. And one thing that is common among these new entrants is that all of them are powered by Cloud Foundry. Whether it is ActiveState, AppFog, Tier 3, Uhuru Software, PaaS.io or VMware’s own CloudFoundry.com, all of them use the same set of APIs and tools based on Cloud Foundry.

Architecting PaaS is not a trivial task! Microsoft and Google put some of the best brains behind Windows Azure and App Engine. An average hoster or even a mature Cloud service provider cannot match the reliable PaaS architecture that Windows Azure or App Engine offer. But by adopting Cloud Foundry, any hoster can claim to be a PaaS player. They can offer popular languages, runtimes, frameworks and services without dealing with the complexity of packaging them for the Cloud. This will commoditize PaaS by empowering many service providers to turn into a PaaS provider overnight!

The other important factor is the emerging Private PaaS paradigm within the enterprises. As public facing line-of-business and web applications find their way to the Public Cloud, enterprises are looking at a Private PaaS layer that they can target to deploy internal applications. If enterprises can find the same PaaS that powers their Private Cloud and the Public Cloud, they can standardize their deployment environments across the organization. This gives them a huge productivity boost along with cost efficiency. By adopting Cloud Foundry as the deployment environment to run the internal LOB applications, enterprises can get the same abilities of the Public PaaS within their environment. Because Cloud Foundry can be run on any Public IaaS Cloud, it is possible to move Cloud applications across the Private and Public Clouds seamlessly. Given the fact that VMware has a lead in the enterprise market through their vSphere adoption, it is a matter of time before they tightly integrate vFabric with Cloud Foundry to offer a solid Private PaaS to their customers.

In the last few weeks, three major announcements reinforced this idea of Cloud Foundry becoming an industry standard for PaaS. AppFog has gone into the GA mode and they give the developers a choice to deploy their apps on multiple Clouds including AWS, Windows Azure, Rackspace and HP Cloud. Uhuru has announced a revamped beta in the form of Uhuru AppCloud. Finally, ActiveState Stackato has entered 2.0 and announced support for .NET.

AppFog has taken an interesting route to offer PaaS across multiple Clouds. By exposing the standard Cloud Foundry APIs that map into the underlying Public Cloud, it enables developers to use the standard APIs and tools to deal with their applications without the need to learn new APIs. The most interesting use of this scenario is seen in its integration with Windows Azure. Microsoft has revamped the Windows Azure platform and the supporting APIs. AppFog built a virtual Cloud Foundry API that translates the REST API into Windows Azure’s new REST API. They also sync the account credentials so that the developers need not even sign up with Windows Azure separately. It is also possible to clone applications to Windows Azure that are deployed in AWS, Rackspace or HP Cloud. This is a very compelling scenario for developers. I deployed a WordPress website through AppFog and I really liked the simplicity. There are only 3 steps that I dealt with – 1) Choosing an Application, 2) Choosing the target Cloud, and 3) Choosing a unique subdomain name.

Last week Uhuru launched their new beta of the AppCloud platform based on Cloud Foundry. I tried deploying an app through their new portal and found it to be simple. Uhuru is one of the few PaaS providers to bring .NET capabilities to the Cloud Foundry environment.

Finally, ActiveState Stackato entered 2.0 with a set of new features including the support for .NET. ActiveState and Tier 3 collaborated to add the .NET support to Stackato.

With VMware investing in Cloud Foundry and the ecosystem extending it, Cloud Foundry is turning out to be a viable PaaS for the businesses.

If Cloud Foundry is becoming a “PaaS standard”, it’s only a “standard” among third- or lower-tier PaaS providers. You won’t see the major cloud players - Amazon, Microsoft, and Google - or the second tier - HP, IBM, Rackspace, Dell, et al. - adopting it.


Steven O’Grady (@sogrady) analyzed IaaS Pricing Patterns and Trends in a 8/2/2012 post to his Redmonk blog:

As cloud adoption accelerates, one of the more important questions facing users becomes pricing. Because cloud pricing models differ fundamentally from those that preceded them, a full understanding of the economics of the cloud has lagged. While cloud pricing is very accessible, it can be difficult to accurately anticipate costs or project longer term operating expenses. By making it frictionless to consume cloud based assets, cloud vendors have ensured high rates of consumption.

The uncertainty around ongoing operating costs is an area that vendors like Newvem and Rightscale (via their ShopForCloud acquisition) are targeting. But given the lack of generally available information comparing pricing for Infrastructure-as-a-Service options, we’ve collected data on standard available rates and performed a few basic analyses to assist with cost projections. A link to the aggregated dataset is provided below, both for fact checking and to enable others to perform their own analyses, expand the scope of surveyed providers or both.

Before we continue, a few notes.

Assumptions

  • No special pricing programs (beta, etc)
  • Linux operating system, no OS premium
  • Charts are based on price per hour costs (i.e. no reserved instances)
  • Standard packages only considered (i.e. no high memory, etc)
  • Where not otherwise specified, the number of virtual cores is assumed to equal available compute units

Objections & Responses

  • This isn’t an apples to apples comparison“: This is true. The providers do not make that possible.
  • These are list prices – many customers don’t pay list prices“: This is also true. Many customers do, however. But in general, take this for what it’s worth as an evaluation of posted list prices.
  • This does not take bandwidth and other costs into account“: Correct, this analysis is server only – no bandwidth or storage costs are included. Those will be examined in a future update.
  • This survey doesn’t include [provider X]“: The link to the dataset is below. You are encouraged to fork it.

With the above in mind, one last general statement. This analysis is intended to explore in more detail differentiations in pricing characteristics from provider to provider. Because none of the providers offer exactly the same packages, because most providers have other pricing and product packages available, and because list prices can be negotiated, this research is not intended to serve as a hard basis for cost planning. It is, rather, an attempt to discern where vendors are being aggressive with pricing, and how they differ from one another in these areas.

Specifically what we’re looking for in the charts is slope. The steeper the slope, the more quickly functionality is going up relative to price, or in layman’s terms, the more you get for your money.

With that, here is the cost of disk space relative to the price per hour.

In general, disk space is cheap in the cloud. According to our numbers, the average cost for one gigabyte of disk space in the cloud per month for all providers is $0.73. IBM had the most expensive average cost at $1.47, with Google the cheapest at $0.25. Our graph confirms this, with Google narrowly edging out Amazon for the steepest trajectory, and therefore the most aggressive pricing for disk space. The close mirroring of the Amazon and Google results, not to mention their separation from the other providers, also suggests that Google’s clear target with the Google Compute Engine is Amazon.

Next, we have a chart examining memory relative to hourly costs.

Here, we see far more pricing consistency amongst the providers. Amazon, Google, Joyent, HP and Microsoft are all tightly grouped in their memory pricing, with Rackspace showing variation and IBM’s cloud commanding a premium on a per gigabyte of memory basis. For those wondering why Joyent’s trajectory extends further, it’s because their standard offering list includes the highest available RAM package at 64 GB.

Lastly, we have an examination of computing units. As mentioned above, because some providers differentiate between cores and computing units, we have approximated the available computing units per provider package as accurately as possible by either using the appropriate computing unit multiple or assuming that the number of cores correctly reflects the total available computing unit count. Corrections to this data are welcome.

In general, this chart again illustrates the centrality of Amazon as a target for other providers. Microsoft’s price per computing unit very closely mirrors Amazon’s, while Google’s pricing is even more aggressive, presumably as part of their recruitment strategy. Joyent, meanwhile, is the least aggressive of the providers when it comes to computing unit pricing, which likely is a function of their automatic CPU bursting feature. HP, IBM and Rackspace, meanwhile, are neither aggressive nor conservative with their cost per computing unit. This may be a function of their respective audiences, which tend to be more enterprise-focused and therefore somewhat less price sensitive.

Overall, the data suggests obvious areas of opportunity to exploit cloud services as well as areas to focus on from a negotiation standpoint. Disk is predictably cheap, and overall server-related storage should not be an issue for cloud customers. Memory is comparatively expensive, at an average of $42.88 per GB per month for all providers, although the skew-resistant median is a more modest $32.06. Computing units, meanwhile, are more expensive yet at an average cost of $86.23 / mo (median $63.27/mo). These costs in particular are likely to continue falling, as competitive pressure ramps up with the emergence of more public (and private) cloud providers.
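For readers wondering how hourly list prices turn into per-GB monthly figures, the conversion presumably looks something like the sketch below; the 730-hour month and the sample package are my assumptions, not Steven's published methodology:

```javascript
// Rough back-of-the-envelope conversion from an instance's hourly list price to a
// per-GB-of-RAM monthly cost. This is an assumed methodology, not the author's.
function memoryCostPerGbPerMonth(hourlyPrice, ramGb, hoursPerMonth) {
    hoursPerMonth = hoursPerMonth || 730; // roughly 365 * 24 / 12
    return (hourlyPrice / ramGb) * hoursPerMonth;
}

// Example: a hypothetical $0.32/hour package with 7.5 GB of RAM works out to
// roughly $31/GB/month, in the ballpark of the $32.06 median quoted above.
console.log(memoryCostPerGbPerMonth(0.32, 7.5).toFixed(2)); // "31.15"
```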

It is also interesting, as noted, to observe the degree to which pricing strategies are clearly built around, or in relation to, Amazon. The data also suggests that exceptions to this pattern are generally enterprise market providers, most obviously IBM, which tend to target customers who historically have had different sensitivities to cost. It will be interesting to see if the public providers like Amazon and those clearly attempting to compete with them on a cost basis, like Google, have a longer term impact on more enterprise-oriented cloud providers with respect to price.

At any rate, the dataset the above research is based on is available here. As mentioned above, you are encouraged to fork it if you wish to expand on it, or notify me of errors/inaccuracies. If spreadsheets offered a pull request mechanism similar to source code repositories, I would encourage that route. In its absence, forked copies will have to suffice.

Additionally, for those who wish to view non-aggregated plots of the above data, I have faceted copies available here: disk, memory and CU.

I look forward to seeing what you all make of this quick look at IaaS pricing data, and what patterns you extract from it.


Jeff Barr (@jeffbarr) described Fast Forward - Provisioned IOPS for EBS Volumes in an 8/1/2012 post to the Amazon Web Services blog:

The I/O Imperative
As I noted earlier this month, modern web and mobile applications are highly I/O dependent, storing and retrieving lots of data in order to deliver a rich and personalized experience.

In order to give you the I/O performance and the flexibility that you need to build these types of applications, we've released several new offerings in the last few months:

  • For seamless, managed scaling of NoSQL workloads, we recently introduced DynamoDB, an SSD-backed NoSQL database with read and write times measured in single-digit milliseconds, with very modest variance from request to request. DynamoDB makes it easy to scale up from 20 to 200,000 or more reads and writes per second and to scale back down again based on your application's requirements. The response to DynamoDB has been incredible, and it has become (as Werner noted) our fastest-growing service.
  • Two weeks ago, we launched the first member of the EC2 High I/O family, the hi1.4xlarge instance type, to support time series analysis, NoSQL databases, and mobile and streaming applications requiring low latency access to storage systems delivering tens of thousands of IOPS. The hi1.4xlarge comes with 2 TB of SSD-backed storage on each instance.

We know that you want more options for your I/O intensive applications, and we're happy to oblige.

Here You Go

What Are IOPS?

As you may know, the performance of a block storage device is commonly measured and quoted in a unit called IOPS, short for Input/Output Operations Per Second.

To put the numbers in this post into perspective, a disk drive spinning at 7,200 RPM can perform at 75 to 100 IOPS whereas a drive spinning at 15,000 RPM will deliver 175 to 210. The exact number will depend on a number of factors including the access pattern (random or sequential) and the amount of data transferred per read or write operation. We are focusing on improving the performance and consistency of database-backed applications that run on AWS by adding new EBS and EC2 options.
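As a rough sanity check on those figures, random IOPS for a spinning drive can be approximated from its rotational latency plus average seek time. The sketch below assumes typical seek times that aren't quoted in Jeff's post:

```javascript
// Back-of-the-envelope estimate of random IOPS for a spinning disk:
// roughly 1 / (average seek time + average rotational latency).
// The seek times used below are typical assumed figures, not values from the post.
function randomIops(avgSeekMs, rpm) {
    var rotationalLatencyMs = (60000 / rpm) / 2; // half a revolution, on average
    return 1000 / (avgSeekMs + rotationalLatencyMs);
}

console.log(Math.round(randomIops(8.5, 7200)));  // 79  (within the 75-100 range above)
console.log(Math.round(randomIops(3.5, 15000))); // 182 (within the 175-210 range above)
```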

Here's what we are announcing today:

  1. A new type of EBS volume called Provisioned IOPS that gives you the ability to dial in the level of performance that you need (currently up to 1,000 IOPS per volume, with more coming soon). You can stripe (RAID 0) two or more volumes together in order to reach multiple thousands of IOPS.
  2. The ability to launch EBS-Optimized instances which feature dedicated throughput between these instances and EBS volumes.

EBS Provisioned IOPS
We released EBS in the summer of 2008. Since that time, our customers have very successfully used EBS to store the persistent data associated with their EC2 instances. We have found that there are certain workloads that require highly consistent IOPS, and others that require more IOPS on an absolute basis. Relational databases certainly qualify on both counts.

As a point of reference, a standard EBS volume will generally provide about 100 IOPS on average, with the ability to burst to hundreds of IOPS on a best-effort basis. Standard EBS volumes are great for applications with moderate or bursty I/O requirements as well as for boot volumes.

The new Provisioned IOPS EBS volume allows you to set the level of throughput that you need, and currently supports up to 1,000 IOPS (measured with 16 KB I/O operations), with higher limits coming soon. For even higher performance, you can stripe multiple Provisioned IOPS volumes together, giving you the ability to deliver thousands of IOPS per logical volume to your EC2-powered application. These volumes deliver consistent performance and are well-suited to database storage, transaction processing, and other heavy random I/O loads. When attached to EBS-Optimized instances, these volumes are designed to deliver within 10% of their provisioned I/O performance 99.9% of the time.
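
Since a single volume currently tops out at 1,000 provisioned IOPS, striping is how you go beyond that. Below is a minimal sketch using the Python boto library (assuming a boto release that supports the io1 volume type and the iops parameter); the region, availability zone, instance ID, and device names are placeholders:

    # Create four 100 GB Provisioned IOPS (io1) volumes at 1,000 IOPS each and
    # attach them to one instance, ready to be striped (RAID 0) inside the guest OS.
    import time
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    instance_id = 'i-12345678'                                  # placeholder
    devices = ['/dev/sdf', '/dev/sdg', '/dev/sdh', '/dev/sdi']  # placeholders

    for device in devices:
        vol = conn.create_volume(size=100, zone='us-east-1a',
                                 volume_type='io1', iops=1000)
        while vol.status != 'available':                        # wait for creation
            time.sleep(5)
            vol.update()
        conn.attach_volume(vol.id, instance_id, device)

    # Building the RAID 0 array across the attached devices (e.g. with mdadm)
    # happens inside the operating system and is outside the scope of this sketch.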

You can create Provisioned IOPS EBS volumes from the AWS Management Console, the command line tools, or via the EC2 APIs. If you use the console, you need only select the Provisioned IOPS volume type and then enter the desired number of IOPS:

Provisioned IOPS volumes are priced at $0.125 per GB of allocated storage per month plus $0.10 per provisioned IOPS per month in US East (Northern Virginia); see the EBS page for more info. By default, each AWS account can create up to 20 TB of Provisioned IOPS volumes with a total of 10,000 IOPS. If you need more of either (or both), simply fill out this form.
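
As a quick worked example using the US East prices quoted above (and ignoring other charges, such as snapshot storage), a 500 GB volume provisioned at 1,000 IOPS would run roughly:

    size_gb = 500
    provisioned_iops = 1000

    storage_cost = size_gb * 0.125        # $0.125 per GB-month  -> $62.50
    iops_cost = provisioned_iops * 0.10   # $0.10 per IOPS-month -> $100.00

    print(storage_cost + iops_cost)       # about $162.50 per month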

You can create a provisioned equivalent of your existing EBS volume by suspending all I/O to your volume, creating a snapshot, and then creating a Provisioned IOPS volume using the snapshot as a starting point.
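
Here is one way that migration path might look with boto; the volume ID, size, and availability zone are placeholders, and the sketch assumes I/O to the source volume has already been suspended:

    # Snapshot an existing standard volume, then restore it as an io1 volume.
    import time
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    snap = conn.create_snapshot('vol-12345678', 'pre-migration snapshot')
    while snap.status != 'completed':     # wait for the snapshot to finish
        time.sleep(15)
        snap.update()

    new_vol = conn.create_volume(size=100, zone='us-east-1a',
                                 snapshot=snap.id,
                                 volume_type='io1', iops=1000)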

EBS-Optimized EC2 Instances
For maximum performance and to fully utilize the IOPS provisioned on an EBS volume, you can now request the launch of EBS-Optimized EC2 instances. An EBS-Optimized instance is provisioned with dedicated throughput to EBS. The m1.large, m1.xlarge, and m2.4xlarge instance types are currently available as EBS-Optimized instances. m1.large instances can transfer data to and from EBS at a rate of 500 Mbit/second; m1.xlarge and m2.4xlarge instances can transfer data at a rate of 1,000 Mbit/second. This throughput is dedicated to EBS traffic and comes in addition to the general-purpose network throughput already available on the instance.
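
Launching a new EBS-Optimized instance from code might look like the following sketch (assuming a boto release that exposes the ebs_optimized flag on run_instances); the AMI ID and key pair name are placeholders:

    # Launch an m1.xlarge with dedicated EBS throughput.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    reservation = conn.run_instances(
        'ami-12345678',                   # placeholder AMI ID
        instance_type='m1.xlarge',
        key_name='my-key',                # placeholder key pair
        ebs_optimized=True)

    print(reservation.instances[0].id)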

There is an additional hourly charge for the EBS-Optimized instances: $0.025 for the m1.large and $0.05 for the m1.xlarge and m2.4xlarge instance types.

You can upgrade your EC2 instances to EBS-Optimized instances as follows:

  1. Shut down any applications that are running on the instance.
  2. Stop the instance.
  3. Modify the instance using the ec2-modify-instance-attribute command and set the EBS-Optimized flag. Change the instance type to one of the supported instance types if necessary.
  4. Start the instance (a scripted version of these steps is sketched below).
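
The steps above use the ec2-modify-instance-attribute command-line tool; the equivalent calls through the Python boto library might look roughly like this (assuming a boto release whose modify_instance_attribute accepts the instanceType and ebsOptimized attributes). The instance ID is a placeholder, and polling for the stopped state is left as a comment:

    # Convert an existing instance to EBS-Optimized; the instance must be stopped first.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    instance_id = 'i-12345678'            # placeholder

    conn.stop_instances([instance_id])
    # ... poll until the instance reports the 'stopped' state ...

    conn.modify_instance_attribute(instance_id, 'instanceType', 'm1.large')  # if needed
    conn.modify_instance_attribute(instance_id, 'ebsOptimized', True)

    conn.start_instances([instance_id])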

And Here's Arun
I spoke with Arun Sundaram, a Product Manager on the AWS Storage team, to learn more about these two features. Here's what he had to say:

And That's That
These new features are available for you to use today. Give them a whirl, and let me know what you think!


James Hamilton analyzed EBS Provisioned IOPS & Optimized Instance Types in an 8/1/2012 post:

imageIn I/O Performance (no longer) Sucks in the Cloud, I said:

Many workloads have high I/O rate data stores at the core. The success of the entire application is dependent upon a few servers running MySQL, Oracle, SQL Server, MongoDB, Cassandra, or some other central database.

imageLast week a new Amazon Elastic Compute Cloud (EC2) instance type based upon SSDs was announced that delivers 120k reads per second and 10k to 85k writes per second. This instance type with direct attached SSDs is an incredible I/O machine ideal for database workloads, but most database workloads run on virtual storage today. The administrative and operational advantages of virtual storage are many. You can allocate more storage with a call of an API. Blocks are redundantly stored on multiple servers. It’s easy to checkpoint to S3. Server failures don’t impact storage availability.

The AWS virtual block storage solution is the Elastic Block Store (EBS). Earlier today two key features were released to support high performance databases and other random I/O intensive workloads on EBS. The key observation is that these random I/O-intensive workloads need to have IOPS available whenever they are needed. When a database runs slowly, the entire application runs poorly. Best effort is not enough and competing for resources with other workloads doesn’t work. When high I/O rates are needed, they are needed immediately and must be there reliably.

Perhaps the best way to understand the two new features is to look at how demanding database workloads are often hosted on-premise. Typically large servers are used so the memory and CPU resources are available when needed. Because a high performance storage system is needed and because it is important to be able to scale the storage capacity and I/O rates during the life of the application, direct attached disk isn’t the common choice. Most enterprise customers put these workloads on Storage Area Network devices which are typically connected to the server by a Fiber Channel network (a private communication channel used only for storage).

The aim of today's announcement is to apply some of what has been learned from 30+ years of on-premise storage evolution. Customers want virtualized storage but, at the same time, they need the ability to reserve resources for demanding workloads. In this announcement, we take some of the best aspects of what has emerged in on-premise storage solutions and give EC2 customers the ability to scale high-performance storage as needed, reserve and scale the available I/Os per Second (IOPS) as needed, and reserve dedicated network bandwidth to the storage device. The latter is perhaps the most important: the combination allows workloads to reserve both the IOPS rate at the storage device and the network channel used to reach it, and to be assured both will be there when needed.

The storage, IOPS, and network capacity is there even if you haven’t used it recently. It’s there even if your neighbors are also busy using their respective reservations. It’s even there if you are running full networking traffic load to the EC2 instance. Just as when an on-premise customer allocates a SAN volume with a Fiber Channel attach that doesn’t compete with other network traffic, allocated resources stay reserved and they stay available. Let’s look at the two features that deliver a low-jitter, virtual SAN solution in AWS.

Provisioned IOPS is a feature of Amazon Elastic Block Store. EBS has always allowed customers to allocate storage volumes of the size they need and to attach these virtual volumes to their EC2 instances. Provisioned IOPS allows customers to declare the I/O rate they need the volumes to be able to deliver, up to 1,000 I/Os per second (IOPS) per volume. Volumes can be striped together to achieve reliable, low-latency virtual volumes of 20,000 IOPS or more. The ability to reliably configure and reserve over 10,000 IOPS means the vast majority of database workloads can be supported. And, in the near future, this limit will be raised, allowing increasingly demanding workloads to be hosted on EC2 using EBS.

EBS-Optimized EC2 instances are a feature of EC2 that is the virtual equivalent of installing a dedicated network channel to storage. Depending upon the instance type, 500 Mbps up to a full 1 Gbps is allocated and dedicated to storage use only. This storage communications channel is in addition to the network connection to the instance. Storage and network traffic no longer compete and, on large instance types, you can drive full 1 Gbps line-rate network traffic while, at the same time, consuming 1 Gbps to storage. Essentially, EBS-Optimized instances have a dedicated storage channel that doesn't compete with instance network traffic.

From the EBS detail page:

EBS standard volumes offer cost effective storage for applications with light or bursty I/O requirements. Standard volumes deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS. Standard volumes are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.

Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases. With Provisioned IOPS, you specify an IOPS rate when creating a volume, and then Amazon EBS provisions that rate for the lifetime of the volume. Amazon EBS currently supports up to 1,000 IOPS per Provisioned IOPS volume, with higher limits coming soon. You can stripe multiple volumes together to deliver thousands of IOPS per Amazon EC2 instance to your application.

To enable your Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, you can launch selected Amazon EC2 instance types as “EBS-Optimized” instances. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used. When attached to EBS-Optimized instances, Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time. See Amazon EC2 Instance Types to find out more about instance types that can be launched as EBS-Optimized instances.

Providing scalable block storage at scale, in 8 regions around the world, is one of the most interesting combinations of distributed systems and storage problems we face. The problem has been well solved in high-cost on-premise solutions. We now get to apply what has been learned over the last 30+ years to solve the problem at cloud scale, at low cost, and with hundreds of thousands of concurrent customers. An incredible number of EC2 customers depend upon EBS for their virtual storage needs, the number is growing daily, and we are really only just getting started. If you want to be part of the engineering effort to make Elastic Block Store the virtual storage solution for the cloud, send us a note at ebs-jobs@amazon.com.

With today's announcement, EC2 customers now have access to two very high performance storage solutions. The first is the EC2 High I/O instance type announced last week, which delivers direct-attached, SSD-powered storage at roughly 100k IOPS for $3.10/hour. Today's announcement adds a high-performance virtual storage solution: this new type of EBS storage allows the creation of striped storage volumes that can reliably deliver 10,000 to 20,000 IOPS across a dedicated virtual storage network.

Amazon EC2 customers now have both high-performance, direct attached storage and high-performance virtual storage with a dedicated virtual storage connection.


<Return to section navigation list>
