Sunday, July 17, 2011

Windows Azure and Cloud Computing Posts for 7/16/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


•• Update 7/17/2011 1:00 PM PDT: Added more articles, marked ••, by Bruce Kyle, Alex Feinberg, Adam Hall and me.

• Update 7/16/2011 1:30 PM PDT: Added a few more articles, marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list>

SQL Azure Database and Reporting

Beth Stackpole (@bethstack, pictured below) asserted As Microsoft’s SQL Azure gains altitude, industry eyes chances of reign in a 7/15/2011 post to the SearchSQLServer.com blog:

When longtime SQL Server MVP Paul Nielsen decided to launch a startup to deliver a hosted customer relationship management application designed for the nonprofit industry, there was no doubt the traditional version of the Microsoft database would serve as the core platform. Yet months into the development effort, Nielsen changed course.

After attending a February 2010 MVP summit, where Microsoft laid out the latest roadmap for its SQL Azure cloud database service, Nielsen felt comfortable that the cloud was ready for prime time. As a result, Nielsen rewrote his business plan around SQL Azure, seizing what he said was a real opportunity to greatly reduce his startup and operating costs.

“After seeing the roadmap and some of the growth features, it made it much more feasible than when SQL Azure first came out,” said Nielsen, author of the SQL Server Bible series. “My wife likes the new business plan that doesn’t have us spending $30,000 on hardware that will be obsolete in two years. Now we can say we have high availability on three servers without buying three servers and without me spending my energy configuring servers, firewalls and security. Whatever you can do to offload some of the human energy cost is a good thing.”

While Nielsen and many other independent software vendors (ISVs) may be ready to cast their lot with SQL Azure, enterprise customers are proving to be a little more hesitant.

SQL Azure, based on SQL Server technologies and rolled out last January, is the database service piece of Microsoft’s Azure cloud-based portfolio. Like other cloud offerings, SQL Azure is optimized for applications that demand high scalability and high availability, allowing businesses to dial up or scale back databases based on business needs.

The pay-as-you-go platform delivers built-in administration capabilities as well as high availability and fault tolerance. It gives firms the option of using sophisticated database functionality at a lower cost than traditional on-premises SQL Server installations, which would require an investment in server and storage hardware, not to mention personnel with database administration and provisioning expertise.

There’s a lot of interest in SQL Azure, but the larger companies are a little bit shy when it comes to cloud computing in general.

--Herve Roggero, managing partner at Blue Syntax Consulting

ISVs and small and medium-sized firms, inherently more open to the cost and availability advantages of the cloud model while having fewer concerns about data security with this new computing paradigm, have shown interest in SQL Azure. Larger companies, on the other hand, tend to be more cautious, with most still evaluating the risks before making a wholesale commitment to migrate mission-critical business applications to a cloud platform like SQL Azure.

“There’s a lot of interest in SQL Azure, but the larger companies are a little bit shy when it comes to cloud computing in general,” observed Herve Roggero, a managing partner at Blue Syntax Consulting, which provides consulting and development services around the Azure cloud platform, and one of the authors of the book Pro SQL Azure. “They’re still trying to figure out where the cloud fits into their strategies and how to manage risks in the cloud.”

SQL Azure’s high-availability promises
For ISVs and smaller companies looking for a way to achieve scalability and high availability for their applications without big financial investments, SQL Azure can provide huge advantages.

The SQL Azure service-level agreement promises 99.9% uptime, with the entry-level Web edition starting at $9.99 per gigabyte a month (Microsoft offers credit if service falls below its guaranteed uptime). SQL Azure automatically maintains multiple copies of data to support its high-availability promises—a setup that would be impossible for most companies to replicate without making a significant capital outlay.

On top of these capabilities, SQL Azure provides a “no-hassle maintenance environment,” according to Roggero, as it automatically handles hardware provisioning, and database allocation and configuration. And though it’s not 100% backward compatible with traditional SQL Server, SQL Azure honors most traditional RDBMS (relational database management system) statements.

Steve Yi, Microsoft’s director of product management for SQL Azure and middleware, said the strategic rationale for offering SQL Server functionality in the cloud sprung from customers’ need to make data available to external partners and mobile users outside of the corporate firewall.

“Companies have concerns about how they scale capabilities to a potentially unknown number of users and have failover and redundancy … and still mitigate cost,” Yi explained. “It’s a significant capital expenditure to have a lot of additional capacity lying around and operational overhead to ensure systems are highly redundant. We think a cloud database offers benefits to customers from both a corporate and DBA [database administrator] perspective and is a boon for developers as well.”

While Yi declined to release specific numbers on corporate SQL Azure deployments, he said there were many pilots in the evaluation stage.

Beyond security concerns related to cloud computing, there are other issues stopping companies from taking the SQL Azure plunge. International regulations requiring data to be kept within the borders of some European countries are a roadblock for global companies, as are other compliance directives around privacy and security in highly regulated industries like health care and finance.

There are also technical limitations with SQL Azure that remain a barrier, including limited support for some data types and features—for example, XML, cross-database joins and spatial data—and one of the more pressing limitations: the current system’s lack of backup capabilities.

“If you’re putting critical data onto Azure, backup is a critical hole,” said Grant Fritchey, a SQL Server MVP and product evangelist at Red Gate Software, which makes software tools for developers. “The paranoid DBA inside me gets the willies when I think about that—it’s like operating without a net.”

While there are workarounds for backup, like copying a database in the cloud, SQL Azure currently lacks the ability to perform scheduled backups and restores -- something Microsoft’s Yi admitted is a limitation but will be addressed in subsequent releases.

The 50 GB database size limit is another common complaint about the current SQL Azure version. “If there’s anything I see as a downside, it’s probably the size issue right now,” said Jeff Mlakar, SQL lead designer in IT services at Ernst & Young. He said his firm is currently evaluating the technology. “It’s very hard currently to federate data across SQL Azure databases.”

Yi said that’s another area Microsoft plans to address in future SQL Azure iterations. (See sidebar, “On High: SQL Azure Roadmap.”)

In the end, experts like Roggero and Nielsen said the upsides of SQL Azure far outweigh any current disadvantages. Some of what people are complaining about actually has less to do with SQL Azure and more to do with misconceptions about what a cloud database service should provide, Roggero said.

“This is not a database in the cloud -- that sets the wrong expectation about what this can and can’t do,” he explained. “It sets the wrong expectation to think you do the same thing in the cloud that you do for on-premise [database design]. The cloud is actually an opportunity to architect and develop in a different way.” …

Beth continues with an “On High: Microsoft’s SQL Azure roadmap” interview of the SQL Azure Team’s Steve Yi. Read the complete article here.

For more details about sharding and federating SQL Azure databases, see my Build Big-Data Apps in SQL Azure with Federation cover article for Visual Studio Magazine’s March 2011 issue and my Sharding relational databases in the cloud article of 7/7/2011 for SearchCloudComputing.com.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com, another TechTarget publication.


See Ryan Duclos will present an Intro to SQL Azure - Ryan Duclos session on 7/21/2011 at 6:00 to 8:00 PM CDT in Mobile, AL in the Cloud Computing Events section.


• Susan Ibach (@HockeyGeekGirl) posted SQL Azure Essentials for the Database Developer to the Canadian Solution Developer blog on 7/11/2011 (missed when posted):

I admit it, I am a SQL geek. I really appreciate a well designed database. I believe a good index strategy is a thing of beauty and a well written stored procedure is something to show off to your friends and co-workers. What I, personally, do not enjoy is all the administrative stuff that goes with it. Backup and recovery, clustering and installation are all important, but it’s just not my thing. I am first and foremost a developer. That’s why I love SQL Azure. I can jump right in to the fun stuff: designing my tables, writing stored procedures and writing code to connect to my awesome new database, and I don’t have to deal with planning for redundancy in case of disk failures, and keeping up with security patches.

There are lots of great videos out there to explain the basics: What is SQL Azure, Creating a SQL Azure Database. In fact there is an entire training kit to help you out when you have some time to sit down and learn. I’ll be providing a few posts over the coming weeks to talk about SQL Azure features and tools for database developers. What I’d like to do today is jump right in and talk about some very specific things an experienced database developer should be aware of when working with SQL Azure.

You can connect to SQL Azure using ANY client with a supported connection library such as ADO.NET or ODBC

This could include an application written in Java or PHP. Connecting to SQL Azure with OLE DB is NOT supported right now. SQL Azure supports tabular data stream (TDS) version 7.3 or later. There is a JDBC driver you can download to connect to SQL Azure, and Brian Swan has written a post on how to get started with PHP and SQL Azure. The .NET Framework Data Provider for SQL Server (System.Data.SqlClient) from .NET Framework 3.5 Service Pack 1 or later can be used to connect to SQL Azure, and the Entity Framework from .NET Framework 3.5 Service Pack 1 or later can also be used with SQL Azure.
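
Here’s a minimal sketch of what that looks like from C# with System.Data.SqlClient; the server name, database and credentials below are placeholders you’d replace with your own values:

using System;
using System.Data.SqlClient;

class SqlAzureConnectionDemo
{
    static void Main()
    {
        // Placeholder values -- substitute your own server, database and credentials.
        // Encrypt=True is a good idea because the connection crosses the public Internet,
        // and SQL Azure expects the user name in the user@servername form.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:myserver.database.windows.net", // fully qualified DNS name from the portal
            InitialCatalog = "MyDatabase",
            UserID = "myadmin@myserver",       // SQL authentication only; integrated security isn't supported
            Password = "placeholder-password",
            Encrypt = true,
            TrustServerCertificate = false
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        using (var command = new SqlCommand("SELECT @@VERSION;", connection))
        {
            connection.Open();
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}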

You can use SQL Server Management Studio (SSMS) to connect to SQL Azure

In many introduction videos for SQL Azure they spend all their time using the SQL Azure tools. That is great for the small companies or folks building a database for their photography company who may not have a SQL Server installation. But for those of us who do have SQL Server Management Studio, you can use it to manage your database in SQL Azure. When you create the server in SQL Azure, you will be given a Fully Qualified DNS Name. Use that as your Server name when you connect in SSMS. For those of you in the habit of using Server Explorer in Visual Studio to work with the database, Visual Studio 2010 allows you to connect to a SQL Azure database through Server Explorer.


The System databases have changed
  • Your tempdb is hiding – Surprise, no tempdb listed under system databases. That doesn’t mean it’s not there. You are running on a server managed by someone else, so you don’t manage tempdb. Your session can use up to 5 GB of tempdb space; if a session uses more than 5 GB of space in tempdb, it will be terminated with error code 40551.
  • The master database has changed – When you work with SQL Azure, there are some system views that you simply do not need because they provide information about aspects of the database you no longer manage. For example, there is no sys.backup_devices view because you don’t need to do backups (if you are really paranoid about data loss, and I know some of us are, there are ways to make copies of your data). On the other hand, there are additional system views to help you manage aspects you only need to think about in the cloud. For example, sys.firewall_rules is only available in SQL Azure because you define firewall rules for each SQL Azure server, but you wouldn’t do that for a particular instance of SQL Server on-premises (see the sketch after this list).
  • SQL Server Agent is NOT supported – Did you notice msdb is not listed in the system databases? There are 3rd party tools and community projects that address this issue. Check out SQL Azure Agent on Codeplex to see an example of how to create similar functionality. You can also run SQL Server Agent on your on-premises database and connect to a SQL Azure database.
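
As a quick, hedged illustration of those cloud-only views, the following sketch lists a server’s firewall rules by querying sys.firewall_rules in the logical master database (server name and credentials are placeholders):

using System;
using System.Data.SqlClient;

class ListFirewallRules
{
    static void Main()
    {
        // sys.firewall_rules lives in the master database of the SQL Azure server.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=master;" +
            "User ID=myadmin@myserver;Password=placeholder-password;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT name, start_ip_address, end_ip_address FROM sys.firewall_rules;", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1} - {2}", reader[0], reader[1], reader[2]);
            }
        }
    }
}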


You don’t know which server you will connect to when you execute a query

When you create a database in SQL Azure there are actually 3 copies made of the database on different servers. This helps provide higher availability, failover and load balancing. Most of the time it doesn’t matter as long as we can request a connection to the database and read and write to our tables. However this architecture does have some ramifications:

  • No four-part names for queries – Since you do not know which server a query will run on when you execute it, four-part names that specify the server name are not allowed.
  • No USE command or cross database queries – When you create two databases there is no guarantee that those two databases will be stored on the same physical server. That is why the USE command and cross database queries are not supported.
Every database table must have a clustered index

You can create a table without a clustered index, but you won’t be able to insert data into the table until you create the clustered index. This has never affected my database design because I always have a clustered index on my tables to speed up searches.
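
Here’s a minimal, hedged sketch of the pattern; the table, columns and connection string are invented for the example. The point is simply that the primary key is declared CLUSTERED when the table is created, so the table is never a heap and inserts work right away:

using System.Data.SqlClient;

class ClusteredIndexDemo
{
    // Placeholder connection string -- substitute your SQL Azure server, database and credentials.
    const string ConnectionString =
        "Server=tcp:myserver.database.windows.net;Database=MyDatabase;" +
        "User ID=myadmin@myserver;Password=placeholder-password;Encrypt=True;";

    static void Main()
    {
        // Declaring the primary key as CLUSTERED creates the required clustered index
        // up front, so subsequent INSERT statements succeed on SQL Azure.
        const string ddl = @"
            CREATE TABLE dbo.Orders
            (
                OrderId      INT           NOT NULL,
                CustomerName NVARCHAR(100) NOT NULL,
                CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
            );";

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(ddl, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}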

Some Features are not currently supported
  • Integrated Security – SQL Server authentication is used for SQL Azure, which makes sense given you are managing the database but not the server.
  • No Full Text Searches – For now at least, full text searches are not supported by SQL Azure. If this is an issue for you, there is an interesting article in the TechNet Wiki on a .NET implementation of a full text search engine that can connect to SQL Azure.
  • CLR is not supported – You have access to .NET through Windows Azure, but you can’t use .NET to define your own types and functions in the database; you can still create your own functions and types with T-SQL.
You can connect to SQL Azure from your Business Intelligence Solutions
  • SQL Server Analysis Services - Starting with SQL Server 2008 R2 you can use SQL Azure as a data source when running SQL Server Analysis Services on-premise.
  • SQL Server Reporting Services – Starting with SQL Server 2008 R2, you can use SQL Azure as a data source when running SQL Server Reporting Services on-premise.
  • SQL Server Integration Services – You can use the ADO.NET Source and Destination components to connect to SQL Azure, and in SQL Server 2008 R2 there was a “Use Bulk Insert” option added to the Destination to improve SQL Azure performance.

Today’s My 5 of course has to relate to SQL Azure!

5 Steps to get started with SQL Azure

  1. Create a trial account and login
  2. Create a new SQL Azure server – choose Database | Create a new SQL Azure Server and choose your region (for Canada, North Central US is the closest)
  3. Specify an Administrator account and password and don’t forget it! – some account names such as admin, administrator, and sa are not allowed as administrator account names
  4. Specify the firewall rules – these are the IP addresses that are allowed to access your database server; I recommend selecting the “Allow other Windows Azure services to access this server” option so you can use Windows Azure services to connect to your database.
  5. Create a Database and start playing – You can either create the database using T-SQL from SSMS, or use Create New Database in the Windows Azure Platform tool, which gives you a wizard to create the database (a hedged T-SQL sketch follows this list).
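
As a hedged sketch of step 5, here’s what the T-SQL route might look like from C#; the server name, credentials and database name are placeholders, and the EDITION and MAXSIZE options shown are SQL Azure-specific:

using System.Data.SqlClient;

class CreateSqlAzureDatabase
{
    static void Main()
    {
        // CREATE DATABASE must be run against the logical master database of your server.
        const string masterConnectionString =
            "Server=tcp:myserver.database.windows.net;Database=master;" +
            "User ID=myadmin@myserver;Password=placeholder-password;Encrypt=True;";

        using (var connection = new SqlConnection(masterConnectionString))
        using (var command = new SqlCommand(
            "CREATE DATABASE MyDatabase (EDITION = 'web', MAXSIZE = 1 GB);", connection))
        {
            connection.Open();
            command.ExecuteNonQuery(); // the Web edition starts at 1 GB
        }
    }
}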

Now you know the ins and outs, go try it out and come back next week to learn more about life as a database developer in SQL Azure.

Susan is currently working as a developer evangelist for Microsoft Canada.


<Return to section navigation list>

MarketPlace DataMarket and OData

Per Hejndorf asserted WCF Web API: OData format doesn’t mean OData service in a 7/15/2011 post:

Over the years I must admit that the WCF acronym to me has had the effect of instilling huge amounts of fear and loathing. We have in-house some really complex stuff that requires gargantuan declarative angle-bracketed configuration monsters that are next to impossible to figure out for most people. One makes changes, fearing both life and mental health. WCF, in short, is one of those Microsoft inventions that make you wonder why you didn’t train to become something else – a plumber for instance.

Recently I’ve been playing a bit with the WCF Web Api, however, and it seems like a really easy way to get some REST-ful data over the wire. When you return a collection of data from Web Api, you can elect to return it as an IQueryable in which case you will, as it says, be “Enabling OData query support” .

Mark that phrasing carefully: You are not enabling OData as such – it’s just that you can now use the OData URI syntax when you retrieve data. So don’t fall into the same hole that I did and think that you have full blown OData!
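
To make that distinction concrete, here’s a minimal, hedged sketch of a WCF Web API resource class; the contract, types and attribute usage are illustrative and based on the preview bits, so registration and configuration details may differ in your version. Returning IQueryable<T> is what enables the OData query syntax ($filter, $top, $skip, $orderby) against the method -- it does not turn the service into a full OData service with metadata and AtomPub feeds:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical resource and data, for illustration only.
[ServiceContract]
public class ContactsResource
{
    static readonly List<Contact> Contacts = new List<Contact>
    {
        new Contact { Id = 1, Name = "Alice" },
        new Contact { Id = 2, Name = "Bob" }
    };

    // GET /contacts?$top=1&$orderby=Name works against this method once the
    // resource is registered with the Web API route mapping in global.asax.
    [WebGet(UriTemplate = "")]
    public IQueryable<Contact> Get()
    {
        return Contacts.AsQueryable();
    }
}

public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
}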

Other than that misconception on my part, it’s all thumbs up for this initiative to focus (and tame) the WCF monster into something more immediately useful. The WCF Data Services (OData) effort is the same story. Now I just have to convince the rest of my fellow devs and customers that the days of heavyweight WCF/SOAP are coming to an end in a lot of (most) scenarios…

PS. I just discovered that we have one SOAP implementation where all data is exchanged with a mobile client as delimited strings, in order to conserve bandwidth by eliminating a lot of XML tags!


Natesh reported Maximizer CRM Live Now Offered From the Microsoft Windows Azure Marketplace in a 7/15/2011 post to the TMC.com blog:

Maximizer Software, a deliverer of Customer Relationship Management (CRM) software and professional services, announced that its Maximizer CRM Live cloud CRM solution can be procured from Microsoft’s Windows Azure Marketplace.

According to the company, the Maximizer CRM Live is a powerful business productivity solution that is easy to deploy, use and maintain. With its advanced features, robust platform, easy deployment and affordable monthly subscription model, it combines a proven, powerful CRM solution with the simplicity and ease of working in the cloud.

Officials with the company stated that they are excited to bring 20+ years of experience and proven success with CRM to the Windows Azure Marketplace. The availability of Maximizer CRM Live from the Windows Azure Marketplace will make it even more convenient for customers to access and start experiencing the power of Maximizer’s full-featured, cloud-based CRM solution. …

As the Maximizer CRM Live is a cloud-based solution, there is little technical expertise required and no need to manage and maintain complex hardware infrastructure. Because of its cost-effective nature, it is capable of offering a quick ROI (return on investment), while freeing one’s company to focus on its core competencies. Customizable to fit any industry, Maximizer CRM Live is secure with a 99.5% uptime guarantee. …

The company added that, available at $49/user/month, Maximizer CRM Live provides customers with an affordable, high-value, full-featured CRM solution.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Eve Mahler (@xmlgrrl) asserted “With The SCIM Specifications, User Provisioning Goes ‘Zero Trust’” in an introduction to her abstract of a Understanding Simple Cloud Identity Management report (US$495) for Forrester Research on 7/15/2011:

Business owners are jumping on SaaS services to get quicker wins, and CIOs are finding these services attractive for cutting costs as well. Since it's relatively quick and easy to hook up these services and get going, security and risk professionals struggle to ensure that the correct users obtain — or are denied — access to them.

Identity provisioning will look quite different in the era of cloud services. Learn how the nascent Simple Cloud Identity Management API will affect your provisioning processes.


The Windows Azure Team (@WindowsAzure) posted Latest Posts to the AppFabric Team Blog on 7/15/2011:

Check out the latest posts to the AppFabric Team Blog if you’re interested in learning more about how to develop and manage a Windows Azure AppFabric application. The post, “Developing a Windows Azure AppFabric Application”, provides a walk-through for developing a Windows Azure AppFabric application that contains a web frontend and a database. The post, “Configuring, deploying, and monitoring applications using AppFabric Application Manager”, describes how to use the Windows Azure AppFabric Application Manager to configure, deploy, and monitor that application.

It appears that the Windows Azure AppFabric Team blog wasn’t consolidated with the Windows Azure Team blog.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

• Microsoft TechNet published Connecting On-Premises Servers to Windows Azure Roles with Windows Azure Connect on 7/15/2011:

Windows Azure Connect provides a simple mechanism to set up IP-based connectivity between on-premises and Windows Azure resources, making it easier for an organization to migrate its existing applications to the cloud. For example, a company can deploy a Windows Azure application that connects to an on-premises SQL Server database or domain-join Windows Azure services to an on-premises Active Directory deployment. In addition, Windows Azure Connect enables remote administration and troubleshooting using the same tools for on-premises applications.

This scenario walks through the steps in learning about, deploying and managing the technology and products to bridge on-premises and Windows Azure systems. It collects our best articles, videos, and training materials with access to all the products you’ll need.

1)  Overview of Windows Azure Connect
Learn which tasks Windows Azure Connect makes easier, and see illustrations of sample configurations in Windows Azure Connect and the Windows Azure Interface.

2)  Windows Azure Platform Security Essentials for Business Decision Makers (Video)
Graham Calladine spends two minutes on the Windows Azure Connect on-prem/off-prem networking bridge (16:15 to 18:05), and answers the most common security concerns business decision-makers have about security of their data in the Windows Azure cloud platform.

3)  Try Windows Azure Connect CTP
To request an invitation to try Windows Azure Connect, please visit the Beta programs section of the Windows Azure Portal. You must have a Windows Azure platform account.

4)  Getting Started with Windows Azure Connect
Learn how to activate Windows Azure roles, install local endpoint software on computers or virtual machines, and create and configure a group of endpoints.

5)  Setting up Windows Azure Connect
Follow a step-by-step walkthrough of how to set up network connectivity between a Windows Azure service, its roles and underlying role instances, and a set of local machines.

6)  Search TechNet Forums for Windows Azure Connect Deployment Discussions
There’s a community of IT Pros waiting to help you.

7)  Secure Networking using Windows Azure Connect (Video)
Graham Calladine, Security Architect with Microsoft Services, describes potential usage scenarios including joining your cloud-based virtual machines to Active Directory, and more.

8)  Security Essentials for Technical Decision Makers (Video)
Graham Calladine spends 4-1/2 minutes on Windows Azure Connect (30:48 to 35:15) and answers the most common security concerns.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Rinat Abdullin (@abdullin) described improving performance on Windows Azure in a Lokad.CQRS - Getting Simpler and Faster post of 7/17/2011:

Recently I've been pushing a few changes into Lokad.CQRS trunk, as required by current Lokad projects. Two major ones are:

  • easy consumption of messages from completely separate projects;
  • lambda dispatch.
Multi-verse consumption

The most important improvement is to simplify consumption of messages coming from different systems. Consider a situation where project C consumes messages from projects A and B. Each project is in essence its own universe with its own dependencies and development speed. Message contracts are distributed in binary form as A.Contracts.dll, B.Contracts.dll etc.

Based on our experience at Lokad, we strongly recommend ensuring that these contract libraries depend only on System.* DLLs. Otherwise, managing multiple projects will become a time-consuming task.

The problem was related to the fact that, given:

  • project A defines its own base interface IMessageA for message classes;
  • project B defines its own base interface IMessageB for message classes.

in project C we can't define a single consumer signature that will handle both base interfaces.

As it turns out, the solution is quite simple. We can define the polymorphic dispatcher to use object as the base message class (this can be done on the latest version):

mdm.HandlerSample<IMyConsume<object>>(s => s.Consume(null));

And then the actual handler could be defined as:

public class SomeHandler : IMyConsume<ProjectAMessage>, IMyConsume<ProjectBMessage>

This small refactoring actually allowed me to decouple the polymorphic dispatcher from the actual dispatch process. That simplified things a lot and enabled dead-simple performance optimization.

More than that, it allowed us to move forward with our current projects within Lokad (where we have on average less than one developer per active project and thus can't waste any time on unnecessary friction).

What is the polymorphic dispatcher? It is the complicated piece of code in Lokad.CQRS that allows you to define handlers as classes that inherit from various consuming interfaces (i.e., IConsume[SpecificMessage] or IConsume[IMessageInterface]), while correctly resolving them through the IoC container and handling scopes and transactions as needed.
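
To make the two snippets above concrete, here's a hedged sketch of what the consuming interface and the handler might look like; the interface and message type names are illustrative, not taken from the Lokad.CQRS source:

// Illustrative marker interface; in practice each project ships its own contracts assembly.
public interface IMyConsume<in TMessage>
{
    void Consume(TMessage message);
}

// Base message interfaces defined independently by projects A and B.
public interface IMessageA { }
public interface IMessageB { }

public class ProjectAMessage : IMessageA { public string PayloadA { get; set; } }
public class ProjectBMessage : IMessageB { public string PayloadB { get; set; } }

// Project C handles messages from both universes; because the dispatcher treats
// System.Object as the common base, no shared contract assembly is required.
public class SomeHandler : IMyConsume<ProjectAMessage>, IMyConsume<ProjectBMessage>
{
    public void Consume(ProjectAMessage message) { /* handle a message from project A */ }
    public void Consume(ProjectBMessage message) { /* handle a message from project B */ }
}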

Lambda Dispatch

Lokad.CQRS is rather simple (unless Azure slows it down) and hence it can be fast. Since we no longer have extremely tight coupling with the polymorphic complexity, we can dispatch directly to a delegate, which has the form of a lambda:

Action<ImmutableEnvelope> handler = envelope => { /* do something */ };

This is implemented as another dispatcher type available for all Lokad.CQRS partitions.

c.Memory(m =>
{
    m.AddMemorySender("test");
    m.AddMemoryProcess("test", x => x.DispatcherIsLambda(Factory));
});

static Action<ImmutableEnvelope> Factory(IComponentContext componentContext)
{
    var sender = componentContext.Resolve<IMessageSender>();
    return envelope => sender.SendOne(envelope.Items[0].Content);
}

Note that we actually use the container only once here, while building the lambda. Aside from that, there is neither reflection nor container resolution, which allows better performance. Below are some test results. The first number is for classical polymorphic dispatch, the second for lambdas. MPS stands for messages per second.

By the way, you can obviously combine different types of dispatchers in a single Lokad.CQRS host.

Throughput performance test (we fill queue with messages and then consume them as fast as possible in 1 thread, doing all proper ACK stuff):

  • Azure Dev Store (don't laugh) - 11 / 11 mps
  • Files - 1315 / 1350 mps
  • Memory - 79000 / 101000 mps

By the way, memory queues are equivalent of non-durable messaging, while file queues are durable.

Reaction time test (single thread sends message to itself, then pulls this message from the queue etc):

  • Azure Dev Store: 8 / 8.4 mps
  • Files - 817 / 908 mps
  • Memory - 14700 / 44000 mps

First of all, you shouldn't worry much about the relatively slow performance of local Azure Queues (that's what happens when you take the complex path). Production performance should be an order of magnitude faster for a single thread. Besides, File and Memory partitions were added precisely to compensate for this slow performance while reducing development friction for the Windows Azure Platform.

BTW, there is a chance that we can come up with better queues for Azure than Azure Queues, drawing inspiration from the LMAX circular architecture and relying on the inherent Lokad.CQRS capabilities.

Second, while looking at Files/Memory performance on a single thread, I would say that this is not so bad for code that was not performance optimized for the sake of staying simple.

Caveat: this performance applies to local operations only. True tests should cover a network scenario that matches your production environment (preferably with a chaos monkey in the house).

Probably if we use RabbitMQ or ZeroMQ, throughput can be even better than files out of the box.

Third, the performance improvement is just a side effect. This refactoring was actually driven by the desire to simplify the core (potentially getting rid of the IoC container) and provide much better support for extremely simple handler composition (based on lambdas), event sourcing and strongly typed pipes.

For example, you can actually wrap all your command handlers (this applies even to polymorphic dispatch of commands/events) with some method that applies to them all (e.g., logging, audit, auth). You can check this video by Greg Young to get a hint of where we are going from here.


PRNet-USA reported Panorama Brings Enterprise BI to the Cloud Using Windows Azure in a 7/15/2011 press release:

Panorama Software, a global leader in proactive Business Intelligence (BI) solutions, announced today its socially-enabled BI platform Necto is now interoperable with the Windows Azure cloud-based application platform.

Panorama Necto embodies Web 2.0 communication by allowing users throughout the enterprise to interact within a social, relevant, and user-centric BI platform. Azure is a cloud-based application for the development and management of off-site applications.

Able to support thousands of enterprise users, Necto using the Azure platform allows businesses to enjoy a sophisticated end-to-end BI solution without the costs and time required for an on-premise solution.

"Panorama's Necto BI solution on top of the Windows Azure cloud platform gives enterprise clients the power of social intelligence and our Automated Relevant Insights with the scalability of the Azure cloud platform," said Eynav Azarya, CEO of Panorama Software. "We are very pleased to present our latest solution with Microsoft, and anticipate considerable enterprise adoption of Necto using the Azure platform due to the substantial cost and performance benefits."

"Time and again, we see innovations from Panorama that add value for our end users," said Kim Akers, General Manager for Global ISV partners at Microsoft Corp. "Necto is a web-based BI solution that takes social media from a personal context and introduces it to the enterprise business realm. Windows Azure provides Necto users with flexibility in a security-enhanced cloud environment."

About Panorama Software:

Panorama Software empowers individuals and global organizations with the ability to rapidly analyze data, identify trends, maximize business opportunities and improve corporate performance and results through a complete SaaS and on-premise BI solution.

Panorama Necto™ is the industry's first socially-enabled Business Intelligence solution that offers a new way to connect data, insights, and people in the organization. The patent-pending solution represents a new generation of BI that enables enterprises to leverage the power of Social Intelligence to gain insights more quickly, more efficiently, and with greater relevancy.

Founded in 1993, Panorama is a leading innovator in Online Analytical Processing (OLAP) and Multidimensional Expressions (MDX). Panorama sold its OLAP technology to Microsoft Corporation in 1996; the technology was rebranded as SQL Server Analysis Services and integrated into the SQL Server platform. Panorama supports over 1,600 customers worldwide in industries such as financial services, manufacturing, retail, healthcare, telecommunications and life sciences. Panorama has a wide eco-system of partners in 30 countries, and maintains offices throughout North America, EMEA and Asia . Visit our website to learn more about Panorama's Business Intelligence Solutions .


MarketWire reported VRX Worldwide Inc.: MediaValet Demonstrated by Microsoft GM During Keynote at Worldwide Partner Conference 2011 in a 7/14/2011 press release:

VRX Worldwide Inc. announces that MediaValet(TM), the digital asset management system launched by its wholly-owned subsidiary VRX Studios, was one of three cloud applications introduced to present their experience on stage by Kim Akers, General Manager for global ISV partners at Microsoft, in front of 1,500 attendees during a keynote held Monday at the Worldwide Partner Conference (WPC 2011) - the worldwide annual gathering for Microsoft partners.

"Taking advantage of the Windows Azure infrastructure, scalability and flexibility, David and his team have focused all their efforts on developing a first-class application," comments Kim Akers, general manager for global ISV partners at Microsoft Corp. "MediaValet has truly embraced all that Windows Azure can do to help them deliver a Cloud-based Digital Asset Management System that has enormous potential for customers in a wide range of industries."

David MacLaren, President and CEO of MediaValet and VRX Studios, spoke about their futile search for an existing Digital Asset Management (DAM) System, how they learned about cloud computing and that Windows Azure met their infrastructure needs. Having developed MediaValet, a 100% cloud-based, enterprise class, truly global, SaaS, DAM System, for their own use, the team quickly found that current customers and entire new markets were also looking for the same solution. Today, MediaValet is a stand-alone product available to all companies, in all industries.

"Gaining this kind of recognition by Microsoft at such a high-profile event has truly launched MediaValet onto the world stage," comments David MacLaren, President and CEO of VRX Studios and MediaValet. "We greatly appreciate the attention by Kim Akers and the belief Microsoft has in our DAM system."

David had also been invited by Microsoft to speak on a roundtable discussion. He joined top Independent Software Vendor (ISV) executives on Monday to speak about "Moving from Software to 'Software as a Service' (SaaS)": roundtable hosted by Yvonne Muench (Microsoft) - Panellists: Arthur Berrill (DMTI Spatial), David MacLaren, Markus Eilers (Runtime Software).

The Worldwide Partner Conference, which started with a keynote by Steve Ballmer, CEO of Microsoft, on Monday morning, continues until Thursday, July 14, 2011.

About MediaValet

MediaValet is a 100% cloud-based, enterprise class, truly global, Digital Asset Management system available to all companies in all industries as a stand-alone, SaaS, web-based application. MediaValet makes it effortless for companies with offices, staff, suppliers and customers to aggregate, organize, maximize the return on their digital assets no matter what computer or browser they're using.

ABOUT VRX STUDIOS

Through a decade of growth, innovation and an unwavering commitment to quality, consistency and customer service, VRX Studios is the leading provider of Photography, Content Management and Licensing services to the global hospitality and travel industries. Through its comprehensive suite of products, covering Architectural, Destination, Food and Beverage and Lifestyle Photography, Content Management, Distribution and Licensing, VRX helps hospitality and travel companies alike capture and showcase their brands to the world. To find out more about VRX Studios, its products and services, visit http://www.vrxstudios.com , http://www.mediavalet.co , email info@vrxstudios.com or call 1.888.605.0059. To find out more about VRX Worldwide Inc., visit www.vrxworldwide.com .


Ryan Dunn posted How to Diagnose Windows Azure Error Attaching Debugger Errors on 7/14/2011 (missed when posted):

I was working on a Windows Azure website solution the other day and suddenly started getting this error when I tried to run the site with a debugger:

image

This error is one of the hardest to diagnose. Typically, it means that there is something crashing in your website before the debugger can attach. A good candidate to check is your global.asax to see if you have changed anything there. I knew that the global.asax had not been changed, so it was puzzling. Naturally, I took the normal course of action:

  1. Run the website without debug inside the emulator.
  2. Run the website with and without debugging outside the emulator.
  3. Tried it on another machine

None of these methods gave me any clue what the issue was as they all worked perfectly fine. It was killing me that it only happened on debugging inside the emulator and only on 1 machine (the one I really wanted to work). I was desperately looking for a solution that did not involve rebuilding the machine. I turned on SysInternals' DebugView to see if there were some debug messages telling me what the message was. I saw an interesting number of things, but nothing that really stood out as the source of the error. However, I did notice the process ID of what appeared to be reporting errors:

image

Looking at Process Explorer, I found this was for DFAgent.exe (the Dev Fabric Agent). I could see that it was starting with an environment variable, so I took a look at where that was happening:

image

That gave me a direction to start looking. I opened the %UserProfile%\AppData\Local\Temp directory and found a conveniently named file there called Visual Studio Web Debugger.log.

image

A quick look at it showed it to be HTML, so one rename later and voila!

image

One of our developers had overridden the <httpErrors> setting in web.config that was disallowed on my 1 machine. I opened my applicationHost.config using an administrative Notepad and sure enough:

image

So, the moral of the story is next time, just take a look at this log file and you might find the issue. I suspect the reason that this only happened on debug and not when running without the debugger was that for some reason the debugger is looking for a file called debugattach.aspx. Since this file does not exist on my machine, it throws a 404, which in turn tries to access the <httpErrors> setting, which culminates in the 500.19 server error. I hope this saves someone the many hours I spent finding it and I hope it prevents you from rebuilding your machine as I almost did.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

No significant articles today.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Bruce Kyle reported Partners Deploying Solutions on Azure Generate 20% to 250% New Revenue from WPC 2011 in a 7/16/2011 post to the US ISV Evangelism blog:

A recent Microsoft-commissioned study conducted by Forrester Consulting found that software partners deploying solutions on Windows Azure are generating 20 percent to 250 percent in new revenue by reaching entirely new customers.

Forrester interviewed six ISVs that had developed applications on the Windows Azure platform, which includes service offerings of Windows Azure, Microsoft SQL Azure, and Windows Azure AppFabric. Based on the data gathered from these ISVs, Forrester constructed a framework for evaluating the potential revenues, pricing strategies, and expenses associated with developing and selling cloud-based software-as-a-service (SaaS) applications.

ISVs are able to reuse up to 80% of their existing .NET code when moving to Windows Azure.

The paper provides details on the business impact on how Windows Azure enables rapid implementation of Cloud applications.


The Windows Azure OS Updates Team posted Windows Azure Guest OS 1.14 (Release 201105-01) on 7/15/2011:

The following table describes release 201105-01 of the Windows Azure Guest OS 1.14:

Friendly name: Windows Azure Guest OS 1.14 (Release 201105-01)
Configuration value: WA-GUEST-OS-1.14_201105-01
Release date: July 15, 2011
Features: Stability and security patch fixes applicable to Windows Azure OS.

Security Patches


This release includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

Bulletin ID – Parent KB – Vulnerability description
  • MS11-035 – KB 2524426 – Vulnerability in WINS Could Allow Remote Code Execution
  • (none) – KB 2478063 – Microsoft .NET Framework 4 Platform Update 1 - Runtime Update

Windows Azure Guest OS 1.14 is substantially compatible with Windows Server 2008 SP2, and includes all Windows Server 2008 SP2 security patches through May 2011.

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.


The Windows Azure OS Updates Team posted Windows Azure Guest OS 2.6 (Release 201105-01) on 7/15/2011:

The following table describes release 201105-01 of the Windows Azure Guest OS 2.6:

Friendly name: Windows Azure Guest OS 2.6 (Release 201105-01)
Configuration value: WA-GUEST-OS-2.6_201105-01
Release date: July 15, 2011
Features: Stability and security patch fixes applicable to Windows Azure OS.

Security Patches


This release includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

Bulletin ID – Parent KB – Vulnerability description
  • MS11-035 – KB 2524426 – Vulnerability in WINS Could Allow Remote Code Execution
  • (none) – KB 2478063 – Microsoft .NET Framework 4 Platform Update 1 - Runtime Update

Windows Azure Guest OS 2.6 is substantially compatible with Windows Server 2008 R2, and includes all Windows Server 2008 R2 security patches through May 2011.

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.

You can tell it was a slow news day when I run two minor OS upgrade posts in full.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

•• Adam Hall described Installing the Operations Manager 2012 Beta! in a 7/17/2011 post to the Microsoft System Center blog:

The Operations Manager 2012 Beta is coming real soon! There are a lot of significant new capabilities, including Network Monitoring and of course my particular area of interest, Application Performance Management.

Below you will see the step-by-step process for installing the Operations Manager Beta using the new installer routine.

Choose your installation type. The big Install button at the top works for me, but you can also perform a local agent install, or configure other components such as the Audit Collection and Gateway Management Server.

image

Accept the EULA.

image

Select the features to install. This is for my demo server so it’s all going on one server with local SQL.

image

Choose where to install the bits.

image

A prerequisites check is then performed.

image

As you can see below, I am missing some Server roles and components, so I need to install these first.

image

After doing that I recheck, and now I can proceed. I am using Dynamic Memory so get a warning.

image

This is a new environment so I am going to create a new Management Group, or if this is an addition to an existing environment you can join it here.

image

Configure the Database. I am running SQL 2008 R2 locally on my demo server, or you can join a remote SQL server as desired.

image

Configure the Data Warehouse, again I am running this locally.

image

I have all components of SQL 2008 R2 running local, so I can choose my local reporting services instance. Again, you can use a remote instance.

image

I’m just going to use the default web site for the web console. This is just for demo and running in an isolated environment, but you should use SSL to protect your credentials and information.

image

Choose an authentication method based on your use case scenario.

image

Configure your service accounts. Typically these should be dedicated service accounts, but I am in demo mode and everything is Domain Administrator. :-)

image

We really value the telemetry we get back, it helps us build better products.

image

A summary is provided, and then the Installation begins!

image

You can monitor the progress …

image

And then be notified when it’s complete and ensure it all went smoothly.

image

So that was a walk through the Operations Manager 2012 Beta installation routine. I will post in the next few weeks information, scenarios and demos of the Application level functionality and use cases.


Scott Bekker (@scottbekker) posted Azure Appliance Fades From Limelight at WPC to the RedmondMag.com blog on 7/14/2011 (missed when posted):

Microsoft promoted its Azure Platform Appliance at last year's Worldwide Partner Conference in Washington, D.C., but the boxcar-size datacenter was just a bit player at this year's event, with no stage presence.


A year later, the Azure Appliance played a strictly backstage role at the 2011 WPC in Los Angeles this week. While the Appliance was absent, however, at least it was mentioned.

Microsoft and its Appliance partners -- HP, Dell, Fujitsu and eBay -- had been mostly tight-lipped about the Appliance in the year between the conferences even though executives had said last July that services based on them would start becoming available in late 2010.

Last year, Microsoft and OEM executives said the Appliances would initially consist of Windows Azure, SQL Azure and the Microsoft-specified configuration of nearly 900 servers, along with storage and networking gear. Microsoft would remotely manage the Appliances and provide platform software updates.

With little public discussion of the Appliances in the interim and with two of the boxes' key public advocates -- Bob Muglia and Ray Ozzie -- gone from Microsoft, the future of the devices was very much in doubt. Meanwhile, Microsoft has recently ramped up emphasis on the related concept of private cloud, which is more a software play and more in line with Microsoft's traditional strengths.

While no Appliances were on display at this year's Dell, Fujitsu, HP or Microsoft booths, Microsoft did confirm that work is continuing on the joint projects.

A blog post by the Server and Tools Business team on Tuesday detailed progress by Fujitsu, HP and eBay. According to the blog:

  • Fujitsu announced in June that they would be launching the Fujitsu Global Cloud Platform (FGCP/A5) service in August 2011, running on a Windows Azure Platform Appliance at their datacenter in Japan. By using FGCP/A5, customers will be able to quickly build elastically-scalable applications using familiar Windows Azure platform technologies, streamline their IT operations management and be more competitive in the global market. In addition, customers will have the ability to store their business data domestically in Japan if they prefer.
  • HP also intends to use the appliance to offer private and public cloud computing services, based on Windows Azure. They have an operational appliance at their datacenter that has been validated by Microsoft to run the Windows Azure Platform and they look forward to making services available to their customers later this year.
  • eBay is in the early stages of implementing on the Windows Azure platform appliance and has successfully completed a first application on Windows Azure (ipad.ebay.com). eBay is continuing to evaluate ways in which the Windows Azure platform appliance can help improve engineering agility and reduce operating costs.

Missing from the blog statement is any mention of Dell. In an interview on Wednesday, Microsoft Corporate Vice President of the Worldwide Partner Group Jon Roskill confirmed that Dell was still working on an appliance. Roskill also contended that the message from Microsoft and the OEM partners about the availability timeline for the Appliances at the 2010 WPC was more nuanced than was generally reported.

Full disclosure: I’m a contributing editor for Visual Studio Magazine another 1105 Media property.


<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

•• Bruce Kyle reported 200 Sessions Announced for SharePoint Conference 2011, as well as a SharePoint 2010 and Windows Azure Bootcamp, in a 7/17/2011 post to MSDN’s US ISV Evangelism blog:

SharePoint Conference 2011 is your only opportunity this year to see over 200 sessions focused on SharePoint 2010 and related technologies both in the cloud and on-premises. All current session content, including abstracts and speakers, has been posted to www.mssharepointconference.com!

The conference will be held in Anaheim, CA on October 3 – 6.

This year’s conference will have a breadth of both technical and non-technical sessions and will have suitable topics for everyone, regardless of whether you are new to SharePoint 2010 or continuing your SharePoint education.

SPC11 will provide you with the training, insight, and networking you need to develop, deploy, govern and get the most from SharePoint. You’ll also hear from Microsoft Engineers, Product Managers, MCMs and MVPs who will discuss topics such as cloud services, best practices and real world project insights.

Don’t miss your chance to attend and learn from these sessions by registering now! Conference registration is only $1,199 and seats are selling fast with only 2.5 months until the event!

Pre & Post Conference Training Opportunities

SharePoint Conference announces five ancillary conference training opportunities with limited space! Act fast before space sells out!

To view more information on each session below click on the training title to learn more about the session including abstracts, agendas, speakers, costs and maximum attendance. Register now to reserve your seat!

Sunday, October 2
Thursday, October 6 (2pm-6pm)
Friday, October 7

Rightscale will present a Scaling SQL and NoSQL Databases in the Cloud Webinar on 7/21/2011 at 11:00 AM PT:

Webinar Overview

The number one cause of poor performance in scalable web applications is the database. This problem is magnified in cloud environments where I/O and bandwidth are generally slower and less predictable than in dedicated data centers. Database sharding is a highly effective method of removing the database scalability barrier by operating on top of proven RDBMS products such as MySQL and Postgres as well as the new NoSQL database platforms.
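
For readers new to the idea, here's a toy, hedged sketch of hash-based shard routing in C#; it illustrates the routing concept only and says nothing about how dbShards actually implements it:

using System;

// A toy shard router: the shard key (here, a customer ID) is hashed to pick
// which physical database holds that customer's rows, so single-customer
// queries touch one database while cross-shard queries must be fanned out.
public class ShardRouter
{
    readonly string[] _shardConnectionStrings;

    public ShardRouter(params string[] shardConnectionStrings)
    {
        _shardConnectionStrings = shardConnectionStrings;
    }

    public string GetShardFor(long customerId)
    {
        int index = (int)(Math.Abs(customerId) % _shardConnectionStrings.Length);
        return _shardConnectionStrings[index];
    }
}

// Usage (connection strings are placeholders):
// var router = new ShardRouter("shard0-connection", "shard1-connection", "shard2-connection");
// string target = router.GetShardFor(42);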

In this webinar, you'll learn what it really takes to implement sharding, the role it plays in the effective end-to-end lifecycle management of your entire database environment, and why it is crucial for ensuring reliability.

In this webinar, we will:

  • Guide you on how to choose the best technology for your specific application
  • Show you how to shard your existing database
  • Review a case study on a Top 20 Facebook application built on dbShards
Speakers
  • Cory Isaacson - CEO, CodeFutures
  • Uri Budnik - Director, ISV Partner Program, RightScale
  • Dave Blinder - CTO, Family Builder

Ryan Duclos will present an Intro to SQL Azure - Ryan Duclos session on 7/21/2011 at 6:00 to 8:00 PM CDT in Mobile, AL:

SQL Azure is part of the Windows Azure platform: a suite of services providing hosted computing, infrastructure, Web services, Reporting services and data services. The SQL Azure component provides the full relational database functionality of SQL Server, but it also provides functionality as a cloud-computing service, hosted in Microsoft datacenters around the globe. We will go over how it works and what it has to offer.

Click here for more details.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

•• My (@rogerjenn) Migrate Access 2000 or Later Databases to Public or Private Rollbase Clouds post of 7/17/2011 also includes links to details of migrating SalesForce.com and Force.com projects to Rollbase:

Now that Office 365 has been released to the Web (RTW) for commercial use, there is considerable interest in taking advantage of SharePoint Online’s Access Services to create Web-based data management applications (Access Web Databases) at a monthly cost of US$6 per user.

An advantage of this approach is that Access Services supports migration of tables, queries, forms and macros to SharePoint lists, Web pages and workflows. Alternatively, you can move just the Access tables to SharePoint Online and link them to on-premises Access front-ends. Optional local data caching improves data access performance and enables offline data entry. You can learn more about migrating Access 2010 applications to SharePoint Online in my May 2011 Webcast.

Note: The current version of Office 365’s Access Services doesn’t support reports. If you need printed reports, Access Hosting offers hosted SharePoint 2010 for up to 10 users at a flat rate of US$99 per month. Access Hosting offers many advantages over SharePoint online, as described here (scroll down.) My Upsizing the Northwind Web Database to an Updated SharePoint 2010 Server Hosted by AccessHosting.com post contains links to my March 2011 Webcast about the topic.

Rollbase is a cloud application platform, which includes a wizard for importing Access 2000 or later *.mdb and Excel *.xls and *.csv files into Rollbase Objects and Fields to create Rollbase Applications for the Web. Rollbase’s primary claims to fame are its reported capability to import Salesforce and Force.com applications and availability in public and private cloud versions. There’s no indication on the Rollbase site of support for importing Access objects other than tables. Hosted applications are US$15 per month per user.


•• Alex Feinberg (@strlen) published a high-level analysis of Replication, atomicity and order in distributed systems (primarily NoSQL) to a GitHub blog on 7/17/2011:

Distributed systems are an increasingly important topic in Computer Science. The difficulty and immediate applicability of this topic are what make distributed systems so rewarding to study and build.

The goal of this post (and future posts on this topic) is to help the reader develop a basic toolkit they could use to reason about distributed systems. The hope is to help the reader see the well known patterns in the specific problems they're solving, to identify the cases where others have already solved the problems they're facing and to understand the cases where solving a hundred percent of the problem may not be worth the effort.

Leaving a Newtonian universe

For the most part, a single machine is a Newtonian universe: that is, we have a single frame of reference. As a result we can impose a total Happened-Before order on events; i.e., we can always tell that one event happened before another event. Communication can happen over shared memory, access to which can be synchronized through locks and memory barriers [1].

When we move to a client-server architecture, a message passing architecture is required. In the case of a single server (with one or more clients), we can still maintain the illusion of a Newtonian universe: TCP (the transport layer used by popular application protocols) guarantees that packets will be delivered to the server in the same order that they were sent by a client. As we’ll see later, this guarantee can be used as a powerful primitive upon which more complex guarantees can be built.

However, there are core reasons why we no longer want to run an application on a single server: in recent times it has become the consensus that reliability, availability and scalability are best obtained using multiple machines. Mission-critical applications must at least maintain reliability and availability; in the case of consumer (and even many enterprise) web applications, with success often come scalability challenges. Thus, it’s inevitable that we leave Newton’s universe and enter Einstein’s [2].

[1] This is not to belittle the fascinating challenge of building parallel shared-memory systems: the topic is simply very well covered elsewhere and outside the scope of this post. I highly recommend The Art of Multiprocessor Programming (by Maurice Herlihy) and Java Concurrency in Practice (Goetz, Lea et al.) to those interested in shared-memory concurrency.

[2] The comparison with the theory of relativity is not original: Leslie Lamport and Pat Helland have both used it. Several concepts in distributed systems, such as vector clocks and Lamport timestamps, are explicitly inspired by the theory of relativity.
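As a rough, hypothetical illustration of the Lamport timestamps mentioned in the footnote above (my sketch, not part of Feinberg’s post), a logical clock can order events consistently with happened-before without any shared physical clock:

```python
# Minimal sketch of a Lamport logical clock: each process keeps a counter,
# increments it on local events and sends, and on receive takes
# max(local, received) + 1. The resulting order is consistent with
# happened-before, but it is only a logical notion of "time".

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        # Timestamp attached to an outgoing message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # Merge the sender's view of time with our own.
        self.time = max(self.time, msg_time) + 1
        return self.time


# Two processes exchanging one message:
a, b = LamportClock(), LamportClock()
a.local_event()        # a.time == 1
ts = a.send()          # a.time == 2; the message carries ts == 2
b.receive(ts)          # b.time == 3: the receive is ordered after the send
print(a.time, b.time)  # 2 3
```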

Intuitive formulation of the problem

Suppose we have a group of (physical or logical) nodes: perhaps replicas of a partition (aka a shard) of a shared-nothing database, a group of workstations collaborating on a document, or a set of servers running a stateful business application for one specific customer. Another group of nodes (which may or may not overlap with the first group) is sending messages to the first group. In the case of a collaborative editor, a sample message could be “insert this line into paragraph three of the document”. Naturally, we would like these messages delivered to all available machines in the first group.

The question is: how do we ensure that, after the messages are delivered to all machines, the machines end up in the same state? In the case of our collaborative editor application, suppose Bob is watching Alice type over her shoulder, sees her type “The” and then types “quick brown fox”: we’d like all instances of the collaborative editor to say “The quick brown fox” and not “quick brown fox The”; nor do we want messages delivered multiple times, e.g., not “The The quick brown fox” and especially not “The quick brown fox The”!
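A toy illustration (my own, not Feinberg’s) of why delivery order and exactly-once delivery matter in the editor example: the same edits applied in a different order, or applied twice, leave replicas with different documents.

```python
# Each edit simply appends a word to a shared document; because appends do
# not commute, replicas that see the edits in different orders (or see a
# duplicate) diverge.

def apply_edits(edits):
    doc = []
    for word in edits:
        doc.append(word)
    return " ".join(doc)

alice = ["The"]
bob = ["quick", "brown", "fox"]

print(apply_edits(alice + bob))          # "The quick brown fox"      -- intended
print(apply_edits(bob + alice))          # "quick brown fox The"      -- wrong order
print(apply_edits(alice + alice + bob))  # "The The quick brown fox"  -- duplicate delivery
```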

We’d also very much like (or, in many cases, require) that if one of the servers goes down, the accumulated state is not lost (reliability). We’d also like to be able to view the state in the case of server failures (read availability) as well as continue sending messages (write availability). When a node fails, we’d also like to be able to add a new node to take its place (restoring its state from other replicas). Ideally, we’d like the latter process to be as dynamic as possible.

We’d also like this to have reasonable performance guarantees. In the case of the collaborative editor, we’d like characters to appear on the screen seemingly immediately after they are typed; in the case of the shared-nothing database, we’d like to reason about performance not too differently from how we reason about single-node database performance, i.e., determined (in terms of both throughput and latency) primarily by the CPU, memory, disks and Ethernet. In many cases we’d like our distributed systems to perform even better than analogous single-node systems (by allowing operations to be spread across multiple nodes), especially under high load.

The problem, however, is that these goals are contradictory.

State machines, atomic multicast and consensus

The approach we’ve just described is state machine replication. This was first proposed by Leslie Lamport (also known as the author of LaTeX) in the paper “Time, Clocks and the Ordering of Events in a Distributed System” (http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#time-clocks). The idea is that if we model each node in a distributed system as a state machine, and send the same input (messages) in the same order to each state machine, we will end up in the same final state.
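Here’s a minimal, hypothetical sketch of that idea: a trivial key-value state machine replicated by feeding every replica the same ordered log of commands. The class and command format are illustrative assumptions, not anything from Lamport’s paper or Feinberg’s post.

```python
# If every replica starts from the same state and applies the same
# deterministic commands in the same order, all replicas end in the
# same final state.

class KVStateMachine:
    def __init__(self):
        self.state = {}

    def apply(self, command):
        op, key, value = command
        if op == "set":
            self.state[key] = value
        elif op == "delete":
            self.state.pop(key, None)

# The replicated log: same commands, same order, for every replica.
log = [("set", "x", 1), ("set", "y", 2), ("delete", "x", None)]

replicas = [KVStateMachine() for _ in range(3)]
for replica in replicas:
    for command in log:
        replica.apply(command)

assert all(r.state == replicas[0].state for r in replicas)
print(replicas[0].state)  # {'y': 2} on every replica
```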

This leads to our next question: how do we ensure that the same messages are sent to each machine, in the same order? This problem is known as atomic broadcast or, more generally, atomic multicast. We should take special care to distinguish this from the IP multicast protocol, which makes no guarantees about the order or reliability of messages: UDP, rather than TCP, is layered on top of it.

A better way to view atomic multicast is as a special case of the publish-subscribe pattern (used by message queuing systems such as ActiveMQ, RabbitMQ and Kafka, and by virtual synchrony-based systems such as JGroups and Spread [3]).

A generalization of this problem is the distributed transaction problem: how do we ensure that either all nodes execute the exact same transaction (executing all operations in the same order), or none do?

Traditionally, the two-phase commit (2PC) algorithm has been used for distributed transactions. The problem with two-phase commit is that it isn’t fault tolerant: if the coordinator node fails, the process is blocked until the coordinator is repaired (see Consensus on Transaction Commit).
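A rough sketch of the two phases and of the blocking failure mode described above; this is a deliberate over-simplification of my own (no durable logging, no timeouts), not a production protocol.

```python
# Two-phase commit: the coordinator asks every participant to prepare, and
# only if all vote "yes" does it broadcast commit. If the coordinator crashes
# after participants have prepared, they are left blocked (typically holding
# locks) until it recovers -- the gap consensus-based commit addresses.

class Participant:
    def prepare(self, txn):
        # A real participant would durably log the transaction and acquire
        # locks before voting; this sketch always votes "yes".
        return True

    def commit(self, txn):
        print(f"committed {txn}")

    def abort(self, txn):
        print(f"aborted {txn}")

def two_phase_commit(coordinator_alive, participants, txn):
    # Phase 1: collect votes.
    votes = [p.prepare(txn) for p in participants]
    if not coordinator_alive:
        # Coordinator fails here: participants are prepared but cannot
        # unilaterally commit or abort.
        return "blocked"
    # Phase 2: decide and broadcast the outcome.
    if all(votes):
        for p in participants:
            p.commit(txn)
        return "committed"
    for p in participants:
        p.abort(txn)
    return "aborted"

print(two_phase_commit(True, [Participant(), Participant()], "txn-1"))   # committed
print(two_phase_commit(False, [Participant(), Participant()], "txn-2"))  # blocked
```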

Consensus algorithms solve the problem of how multiple nodes can arrive at a commonly accepted value in the presence of failures. We can use a consensus algorithm to build fault-tolerant distributed commit protocols by (this is somewhat of an over-simplification) having nodes “decide” whether or not a transaction has been committed.

[3] Virtual synchrony (making asynchronous systems appear synchronous) is itself a research topic that is closely related to, and at times complemented by, consensus work. Ken Birman’s group at Cornell has done a great deal of work on it. Unfortunately, it was difficult to work much of this fascinating research into a high-level blog post.

Theoretic impossibility, practical possibility

The problem is that it’s impossible to construct a consensus algorithm guaranteed to terminate within a time bound in an asynchronous system lacking a common clock: this is known (after Fischer, Lynch and Paterson) as the FLP impossibility result. Eric Brewer’s CAP theorem (a well-covered topic) can be argued to be an elegant and intuitive restatement of FLP.

In practice, however, consensus algorithms can be constructed with reasonable liveness properties. The result does imply, though, that consensus should be limited in its applications.

One thing to note is that consensus protocols can typically handle simple or clean failures (failures of a minority of nodes) at the cost of greater latency; handling more complex scenarios (split-brain situations where a quorum can’t be reached) is more difficult.

Paxos and ZAB (Chubby and ZooKeeper)

The Paxos consensus and commit protocols are well known and are seeing greater production use. A detailed discussion of these algorithms is outside the scope of this post, but it should be mentioned that practical Paxos implementations have somewhat modified the algorithms to allow for greater liveness and performance.

Google’s Chubby service is a practical example of a Paxos-based system. Chubby provides a file system-like interface and is meant to be used for locks, leases and leader elections. One example of the use of Chubby (which will be discussed in further detail in the next post) is assigning mastership of partitions in a distributed database to individual nodes.

Apache ZooKeeper is another practical example of a system built on a Paxos-like distributed commit protocol (ZAB). In this case, the consensus problem is slightly modified: rather than assuming a purely asynchronous network, the TCP ordering guarantee is taken advantage of. Like Chubby, ZooKeeper exposes a file-system-like API and is frequently used for leader election, cluster membership services, service discovery and assigning ownership to partitions in shared-nothing stateful distributed systems.
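As a hedged example of the leader-election use case, the classic ZooKeeper recipe has each candidate create an ephemeral sequential znode and treats the lowest sequence number as the leader. The sketch below uses the third-party kazoo Python client; hostnames and paths are placeholders of my choosing, and a real implementation would also watch its predecessor znode and handle session expiry.

```python
# Requires a running ZooKeeper ensemble and the kazoo client (pip install kazoo).

from kazoo.client import KazooClient

ELECTION_PATH = "/myapp/election"   # assumed application-specific path

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()
zk.ensure_path(ELECTION_PATH)

# Each candidate creates an ephemeral, sequential znode; ZooKeeper assigns a
# monotonically increasing sequence number, which effectively orders candidates.
my_node = zk.create(ELECTION_PATH + "/candidate-",
                    b"host-a", ephemeral=True, sequence=True)

children = sorted(zk.get_children(ELECTION_PATH))
am_leader = my_node.endswith(children[0])

if am_leader:
    print("elected leader; take ownership of partitions here")
else:
    print("follower; would set a watch on the preceding candidate znode")

zk.stop()
```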

Limitations of total transactional replication

A question arises: why is transactional replication only used for applications such as cluster membership, leader elections and lock managers? Why aren’t these algorithms used for building distributed systems themselves? Wouldn’t we all like a fully transactional, fault-tolerant, multi-master distributed database? Wouldn’t we like message queues that promise to distribute exactly the same messages, to exactly the same nodes, in exactly the same order, delivering each message exactly once at the exact same time?

The above-mentioned FLP impossibility result provides one limitation of these systems: many practical systems require tight latency guarantees even in the face of machine and network failures. The Dangers of Replication and a Solution also discusses scalability issues such as increases in network traffic and potential deadlocks in what the authors call “anywhere-anytime-anyway transactional replication”.

In the case of Chubby and ZooKeeper, this is less of an issue: in a well-designed distributed system, cluster membership and partition ownership changes are less frequent than updates themselves (much lower throughput, less of a scalability challenge) and are less sensitive to latency. Finally, by limiting our interaction with consensus-based systems, we are able to limit the impact of scenarios where consensus can’t be reached due to machine, software or network failures.

What’s next?

The next post will look at the common alternatives to total transactional replication, as well as several (relatively recent) papers and systems that do apply some transactional replication techniques at scale.


Carl Brooks (@eekygeeky) posted VMware vCloud Director 1.5: Small but definitive step forward to the SearchCloudComputing.com blog on 7/15/2011:

Along with the release of vSphere 5, widely acclaimed as a technical success and a potential licensing crisis, VMware has unveiled vCloud Director 1.5 (vCD 1.5). Neither is available yet, but the details have been released. Users hail vCD 1.5 as a great start toward getting a viable private cloud from VMware.

Overall, however, vCD deployment remains very low, with lighthouse cases in some enterprise test and dev environments and most traction at service providers. Private organization adoption is sparse; one person listed as a vCloud Director customer by VMware and contacted for this story did not know if they were using the product and thought a former graduate student may have experimented with it at some point.

That might begin to change because, most importantly, the deployment options for vCloud Director (vCD) have changed. It still needs to be deployed to a physical host and can’t run as a virtual machine (VM), but it now supports Microsoft SQL Server 2005 and 2008 databases as a back end, and VMware promises more database support to come. vCloud Director 1.0 required a full Oracle database all to itself, a high barrier to adoption in and of itself.

"To be honest, the biggest thing that stopped us going forward was the Oracle licensing," said Gurusimran Khalsa, systems administrator for the Human Services Division of the State of New Mexico. He said his agency had already endured several years of consolidation and virtualization and vCloud Director looked attractive. HSD even bought a few licenses to experiment with but never used them because the requirement for Oracle was something the division had successfully dodged in the past and wasn’t about to sacrifice for vCD.

Khalsa, who oversees about 300 VMs on 4 or 5 physical hosts, said vCloud Director looked like a great idea on paper. He wants to use it for development and said his division had looked at VMware Lab Manager in the past but hadn’t bought it. With the new features, HSD will begin using those old vCloud Director licenses and begin testing its capabilities in short order, said Khalsa. "We're really interested in it from a lab manager standpoint," he said. "We've got a lot of development going on and a lot more coming.”

Steve Herrod, VMware’s CTO, said setting up big labs was the most popular use case for vCloud Director among enterprise IT organizations thus far.

Khalsa said much of his workload is running SQL servers, Web applications and Web servers. He said the biggest potential pain in the neck when implementing vSphere 5 and vCloud Director is going to be making sure his third-party management tools, HyTrust, Altor and Xangati, don't come unglued. vCloud Director and vSphere 5 aren't anywhere close to replacing the functionality of those tools, said Khalsa.

He also thinks VMware missed an opportunity on vRAM licensing for private cloud implementations: The new scheme requires licenses based on how much vRAM is allocated to each VM, but Khalsa wants to be licensed based on how much is actually used. Imagine 100 Windows SBS VMs, each allocated 4GB of vRAM from a pool, while the running servers typically use only 1GB in operation, said Khalsa. "If you're getting all the benefits of consolidated RAM, it'd be nice to pay only for what you were using," he said.
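To make Khalsa's point concrete, here is a back-of-the-envelope comparison of my own; the per-license vRAM entitlement below is an assumed figure for illustration only, and the per-processor component of vSphere licensing is ignored.

```python
# Rough comparison of licensing on allocated vRAM vs. vRAM actually used.

vms = 100
allocated_gb_per_vm = 4            # vRAM configured for each VM
used_gb_per_vm = 1                 # vRAM each guest actually consumes
entitlement_gb_per_license = 48    # hypothetical entitlement, for illustration only

def licenses_needed(total_gb):
    # Ceiling division: -(-a // b) rounds up.
    return -(-total_gb // entitlement_gb_per_license)

print(licenses_needed(vms * allocated_gb_per_vm))  # 9 licenses to cover 400 GB allocated
print(licenses_needed(vms * used_gb_per_vm))       # 3 licenses to cover 100 GB actually used
```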

Other users rebel against vSphere 5 pricing
"I understand their desire to move to a charge model of pay as you go as that's one of the basic tenants of cloud computing, I'm just not convinced they got some of the parameters set appropriately," said Matt Vogt, systems administrator for Fuller Theological Seminary in Pasadena, Calif., in an email. Vogt, who also blogs about his tech work, said vCD was interesting but probably not necessary to his organization; he's not spinning up VMs all day, so he doesn't need the capabilities or the added expense.

Vogt is jazzed about vSphere 5, though, and said that he'd love to see some of the vCD features as part of the base vSphere package. "Linked clones, though, is super intriguing. This concept has been around on the SAN side for a while (we're on EqualLogic), so it's nice to see it come to the vSphere side of things, just wish it were a part of the base suite," he said.

vCD 1.5 now supports linked clones, meaning virtual machines can be quickly provisioned from a master template instead of creating full copies for each new instance. Lab Manager could do this, and from a developer's perspective that meant very flexible experimentation and provisioning of test environments. On the storage side, as Vogt notes, it means significant savings in infrastructure, since each clone can draw on a master template rather than storing redundant information for each and every VM. vCD can store clone information across multiple "virtual data centers," which are discrete networks of virtual resources.
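A rough, hypothetical storage estimate of what linked clones can save compared with full clones (the disk sizes below are my assumptions, not figures from VMware or the article):

```python
# Full clones copy the whole template disk per VM, while a linked clone
# stores only a delta disk that references the shared master template.

template_gb = 40         # size of the master template's virtual disk (assumed)
delta_gb_per_clone = 2   # per-clone writes captured in the delta disk (assumed)
clones = 100

full_clone_storage = clones * template_gb
linked_clone_storage = template_gb + clones * delta_gb_per_clone

print(full_clone_storage)    # 4000 GB with full copies
print(linked_clone_storage)  # 240 GB with a shared master plus deltas
```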

vCloud Director’s public cloud potential
BlueLock, a hosting and managed service provider, said the most common request it gets is how to connect its environment to its customers' environments, according to BlueLock CTO Pat O'Day.

BlueLock operates one of the larger vCloud service provider environments in the U.S. But connecting users' VMware environments to BlueLock's vCloud, even in the same data center, has not been simple.

O'Day said this release will perk up fence sitters, since the vShield integration means connecting two vClouds is a matter of entering a few pieces of information instead of the messy effort it had been, requiring support personnel at both locations to coordinate and execute many steps.

vShield and vCloud now have better integration through the vCloud API, so users can effectively manage perimeters and connect virtual data centers from vCloud directly. It also now supports third party virtual networking technologies. VMware has made a vCD 1.5 technical overview available for further explanation.

"Once it becomes simple to connect to a public vCloud or two private vClouds, federation becomes much more interesting," O'Day said. He sees gradual progress on making a truly seamless, hybrid cloud out of VMware for enterprises. Right now, you either put a lot of equity into patching together a private cloud with vCloud Director or you outsource, using vCloud services from somebody like BlueLock. As the technology advances, said O'Day, it's eventually going to live up to the promise of cloud computing in the Amazon-style, only enterprises will get their familiar VMware tools and support.

He compared the technical features to how Microsoft gradually made clipboard data available to many applications. Back in the dark ages (the 1990s), cutting and pasting from a document to a spreadsheet or another application simply wasn't possible.

Then, almost overnight, it was, and that small advance in Windows was incredibly valuable to end users. Who could imagine not being able to cut and paste from one app to another today? That's kind of what linked clones mean to vCloud and vSphere, said O'Day, and it's symbolic of the trend, and the promise, of cloud computing overall.

That said, he's pretty sure neither vSphere nor vCloud Director qualify as the full blown cloud utopia, at least, not yet. "You're seeing it more on the PowerPoint slide rather than in the engineering, but it's slowly becoming a cloud infrastructure," he said. vCloud Director 2.0, perhaps?

More on vCloud Director

Carl is the Senior Technology Writer for SearchCloudComputing.com.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Klint Finley (@Klintron) asked Poll: Did VMware Screw-Up With Its New Pricing Model? in a 7/15/2011 post to the ReadWriteCloud blog:

One of the various announcements coming out of VMware this week is a change to how vSphere is priced. VMware's "simplified" pricing can be found in a nightmarish 10-page white paper. Hey, no one ever said enterprise technology pricing was easy.

But the problem is that VMware's new prices are much higher for some customers. Ars Technica points to this thread on VMware's community site. And CRN reports on how Microsoft is already hammering VMware on its new pricing model.

The idea behind the new licensing was to move away from charging based on the specs of the physical server and toward charging based on the resources actually used by the virtual machines. One big change is a move toward letting customers pool virtual RAM from multiple physical servers.

The problem is that the new approach is actually much more expensive for many, perhaps most, customers who didn't have a lot of excess RAM lying around unused by virtual machines.

Microsoft described VMware's new pricing as Moore's Law's in reverse - in some configurations, the cost per virtual machine actually goes up as you add more machines. One poster on VMware's community site wrote "You could go out and buy the physical box for way less than that today, from any hardware vendor."

VMware has always been more expensive than the competition, but that hasn't stopped it from capturing over 50% of the enterprise server virtualization market. Competing on price isn't usually a good strategy in the enterprise. But considering that Hyper-V is free [with] Windows 2008 servers, and there's a free edition of XenServer as well, has VMware finally gone too far?

Comparing vSphere to Hyper-V or Xen isn't exactly an apples-to-apples comparison, but Citrix has recently upped its game with its acquisition of Cloud.com, and Hyper-V will be adequate for many enterprise virtualization needs.


<Return to section navigation list>
