Sunday, May 27, 2012

Windows Azure and Cloud Computing Posts for 5/25/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 5/26/2012 with new articles marked • by Richard Astbury, Himanshu Singh, Dave Asprey, Alex Williams, Charles Babcock, Ralf Handl, Susan Malaika and Michael Pizzo.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

The Microsoft Business Intelligence Team posted Big Data, Predictive Analytics, and Self-Service BI Customer Stories on 5/25/2012:

If you've been following the latest technology trends, you've probably heard quite a bit about topics such as big data, predictive analytics and self-service business intelligence (BI). Microsoft provides solutions in all these areas, leveraging tools that many people are already using today such as Excel, SharePoint and SQL Server, while also integrating with innovative new platforms such as Hadoop and Information Marketplaces. This integrated approach helps our customers build solutions that drive business performance, create differentiation and stay ahead of the competition. To provide more tangible examples of these trends, we thought it would be useful to describe how a few of our customers are using some of these technologies to achieve their business goals.

Klout is an innovative company that consumes large amounts of raw social media data, turning it into valuable information that is actionable for consumers, brands and partners. They do this by storing large data-sets in Hadoop and using Microsoft Business Intelligence to power their insights. Read about their story here, or better yet hear Klout's VP of Technology Dave Mariani give a detailed discussion on leveraging Microsoft BI at the Hadoop Summit conference next month.

Predictive analytics is used to analyze known facts to make predictions about future events. Motricity provides mobile advertising and marketing solutions for advertising agencies, consumer brands and mobile operators. Motricity needed to rapidly process repeated cycles comparing various attributes and derive correlations from consumer data. To achieve the high performance required for this data-intensive task, they used a new feature in SQL Server 2012 called xVelocity, which includes a memory-optimized columnstore index. This use of in-memory technology can drastically speed performance – another customer, Bank of Nagoya, used the same technology to speed up data warehousing and business intelligence queries. They were able to run a 100 million record query in just 3 seconds versus the 30 minutes previously required.

One of the key challenges for many BI projects is to provide insights to all the people in an organization, not just the skilled IT professionals, and this has driven the trend towards self-service BI solutions. Volvo Car Corporation wanted business analysts and engineers to use BI tools which would help improve efficiency and make it easier for them to collaborate. They implemented a BI solution based on Power View and related technologies, including SharePoint and Office. Employees were already familiar with Microsoft products, improving performance and reducing training costs. Productivity gains have already been impressive. They now plan to roll out Power View's rich data visualization technology to 5,000 users, empowering the entire company to make data-driven decisions.

Data-driven decision making can be difficult if the integrity of the data is called into question. Data is often collected from various systems with unique standards, causing high variability. SQL Server 2012 Data Quality Services (DQS) and Master Data Services (MDS) are built-in technologies to address this problem. DQS is a solution that enables you to discover, build and manage knowledge about your data so that you can cleanse, match and profile it. MDS provides an authoritative data source across different applications with automated workflows, rules, hierarchies and attributes. RealtyTrac collects real estate information from across the country, and due to various data standards and the complexity of the mortgage industry they found it challenging to obtain reliable data. RealtyTrac used DQS to cleanse its 850 GB transactional database and the SQL Server 2012 MDS Add-In for Microsoft Excel to easily update its master data. As a result, they have a streamlined process for providing reliable data to employees and customers that is cost-effective and enhances their competitive advantage.

Finally, Havas Media is a great example of a customer that’s leveraged the full power of Microsoft BI to drive operational efficiencies, improve business insight and increase its revenue opportunities. Havas manages over 100 terabytes of complex data, with over 300 million rows of data being loaded daily from multiple sources. After evaluating 20 vendor solutions, Havas chose Microsoft because its familiar tools empowered end users with the ability to drive data insights on their own. SQL Server PowerPivot and Power View data visualization technologies cut reporting time by 80 percent and allowed Havas to go from a reactive to a proactive approach with clients, generating more revenue opportunities. Read the case study or hear Tony Jaskeran (Head of BI, Havas Media) discuss details of his BI solution next month in an exclusive interview with CIO Magazine (we’ll post the URL in June!).


Manu Cohen-Yashar (@ManuKahn) asked Does a Blob Exist? in a 5/24/2012 post:

One of the most common tasks relating to blobs is a simple check to verify whether it exists. Unfortunately there is no method or property on CloudBlob that provides an answer.

I wrote a simple extension method to do that:

        public static bool Exists(this CloudBlob blob)
        {
            try
            {
                // FetchAttributes issues a HEAD request; it succeeds only if the blob exists.
                blob.FetchAttributes();
                return true;
            }
            catch (StorageClientException e)
            {
                if (e.ErrorCode == StorageErrorCode.ResourceNotFound)
                {
                    return false;
                }
                // Any other storage error is unexpected, so let it bubble up.
                throw;
            }
        }

But when I ran the method I sometimes got an InvalidOperationException: "BlobType of the blob reference doesn't match BlobType of the blob". No one can explain this, including Steve Marx, but many developers have seen it happen. So I changed my code to:

        public static bool Exists(this CloudBlob blob)
        {
            try
            {
                // Use the concrete blob type when known, to avoid the BlobType mismatch.
                if (blob is CloudBlockBlob)
                {
                    ((CloudBlockBlob)blob).FetchAttributes();
                    return true;
                }
                if (blob is CloudPageBlob)
                {
                    ((CloudPageBlob)blob).FetchAttributes();
                    return true;
                }
                try
                {
                    blob.FetchAttributes();
                    return true;
                }
                catch (InvalidOperationException ex)
                {
                    // The service found the blob but its type differs from the reference type,
                    // so the blob does exist.
                    if (ex.Message == "BlobType of the blob reference doesn't match BlobType of the blob")
                        return true;
                    throw;
                }
            }
            catch (StorageClientException e)
            {
                if (e.ErrorCode == StorageErrorCode.ResourceNotFound)
                {
                    return false;
                }
                throw;
            }
        }
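
A minimal usage sketch of the extension method above (the connection string, container, and blob names are placeholders; it assumes the Microsoft.WindowsAzure and Microsoft.WindowsAzure.StorageClient namespaces are referenced):

// Usage sketch only; the account credentials and blob path are placeholders.
var account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");
var client = account.CreateCloudBlobClient();
var blob = client.GetBlobReference("mycontainer/report.pdf");

Console.WriteLine(blob.Exists() ? "Blob exists." : "Blob not found.");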

Ariel Dan (@ariel_dan) posted Cloud Storage Encryption and Healthcare Information Security to the Porticor Cloud Security blog on 5/24/2012:

Healthcare data security has been around for a long time, but as cloud computing gains more and more traction, healthcare providers as well as healthcare software vendors would like to take advantage of the cloud and migrate healthcare data, or run healthcare software from a cloud infrastructure. In this blog I'll focus on specific cloud computing healthcare security concerns and how cloud encryption can help meet regulatory requirements.

The first step to securing healthcare data is to identify the type of healthcare information and the appropriate cloud storage for it. Visual healthcare data mainly comprises large media files such as x-ray, radiology, CT scans, and other types of video and imaging. Such files are often stored in distributed storage, such as Amazon Web Services S3 (Simple Storage Service) or Microsoft Azure blobs. Personally Identifiable Information (PII), such as patient records, is often stored in a relational database as structured data.

In many cases healthcare providers and healthcare software vendors are required to protect both data types, and their main challenge becomes the management of this diverse data environment in a cost-effective and management-friendly manner. As mentioned in one of my previous articles, cloud encryption should be considered a fundamental first step.

But data encryption is only one part of the equation. The most challenging issue healthcare ISVs and providers are facing is the issue of encryption keys, and how to effectively and securely manage encryption keys in the cloud without sacrificing patients' trust or regulatory compliance. Current key management solutions are often limited and do not provide an answer to the most important question: "who can access patients' data?" Or in other words – "who's managing the encryption keys?" Existing key management solutions will either let you, the healthcare provider, manage encryption keys for your users in the cloud, or install (yet another) physical key management server back in your datacenter. Unfortunately, both of these approaches leave the encryption keys – and therefore patients' data – in the hands of the ISV or the provider. The latter approach also reintroduces a physical data center into the equation, and so eliminates many of the cloud benefits. In our opinion, cloud key management is one of the biggest stumbling blocks standing between healthcare providers and taking advantage of the cloud.

Best practice for effective and secure cloud key management is split-key encryption. Split key is a patent-pending and innovative technology designed for key management in the cloud. It allows healthcare providers for the first time to manage encryption keys in the cloud, yet at the same time to split the encryption key, so customers (for example a hospital using medical applications hosted in the cloud) are the only ones who control their "half key", and therefore patient data is never visible to the cloud provider or healthcare software vendor. (For further reading about Porticor's split-key technology click here.)
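
As a toy illustration of the split-key idea only (a simple XOR split is shown below; Porticor's patent-pending scheme is not detailed in this post), each half is useless on its own, and the data-encryption key only exists when the two halves are combined:

using System;
using System.Security.Cryptography;

class SplitKeySketch
{
    static void Main()
    {
        // Toy illustration: XOR-split a 256-bit data-encryption key into two shares.
        var rng = new RNGCryptoServiceProvider();
        var dataKey = new byte[32];       // the key that actually encrypts patient data
        var customerShare = new byte[32]; // random share held only by the customer
        rng.GetBytes(dataKey);
        rng.GetBytes(customerShare);

        // The provider's share is the XOR of the key and the customer's share.
        var providerShare = new byte[32];
        for (int i = 0; i < dataKey.Length; i++)
            providerShare[i] = (byte)(dataKey[i] ^ customerShare[i]);

        // Neither share alone reveals the key; combining them recovers it exactly.
        var recovered = new byte[32];
        for (int i = 0; i < dataKey.Length; i++)
            recovered[i] = (byte)(customerShare[i] ^ providerShare[i]);

        Console.WriteLine(Convert.ToBase64String(recovered) == Convert.ToBase64String(dataKey)); // True
    }
}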


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

Trent Swanson (@trentmswanson) wrote ISV Guest Post Series: Full Scale 180 Tackles Database Scaling with Windows Azure, which the Windows Azure Team published on 5/24/2012:

Editor’s Note: Today’s post, written by Full Scale 180 Principal Trent Swanson, describes how the company uses Windows Azure and database partitioning to build scalable solutions for its customers.

Full Scale 180 is a Redmond, Washington-based consulting firm specializing in cloud computing solutions, providing professional services from architecture advisory to solution delivery. The Full Scale 180 team has a reputation for delivering innovative cloud solutions on Windows Azure, for seemingly impossible problems. Full Scale 180 works with customers across a broad set of industries, and although every project is unique, there are often a lot of common concerns and requirements spanning these solutions.

Over the course of various projects with customers, designing, implementing, and deploying some very cool solutions on Windows Azure, the company has been presented with some very interesting challenges. A challenge we often encounter is database scaling.

As far as working with a data store is concerned, at a very high level, you need to concentrate your work in two main areas:

  • The “place” where data is stored
  • Getting the data in and out of that place in the most efficient way

The game of complexity and higher abstraction layers is an interesting one in the software development realm. You start with a means (the word represents many different concepts here, such as an API, library, programming paradigm, class library, or framework) for getting something done, eventually reaching an equilibrium state: either coming up with your own higher-level abstraction constructs, or using one from somebody else, often referred to as the build/acquire decision. Like anything else, data stores follow the same pattern. When dealing with relational stores, such as SQL Azure, you need to play by the rules set by the system.

The Place Where Data is Stored

When working with SQL Azure, the physical components of the data store are no longer your concern, so you do not worry about things like data files, file groups, and disk-full conditions. You do need to consider resource limitations imposed by the service itself. Currently SQL Azure offers individual databases up to 150GB.

It's common to expect an application's database consumption to grow over time. Unlike an on-premises database, the only dimension you can control is the procurement of additional database space from Windows Azure. There are two approaches for this: either plan for expansion and procure new space ahead of time (which may defeat the purpose of running in the cloud) or expand automatically as the need arises, based on policies. If choosing the latter, we then need to find a way to partition data across databases.

Optimum Data Transfer and Sharding

Aside from space management, we need to make sure that moving data into and out of the data store is fast. With on-premises systems, both network and disk speed may be optimized, but on cloud platforms this is typically not an available optimization, so a different approach is needed. This usually translates into parallelizing data access.

Data storage needs will grow, yet we need to play within the rules set by the platform for maximum database size. Likewise, we must learn to design solutions with these size and throughput limitations in mind. Whether it’s connectivity to the data store, the physical storage throughput within the data store, or size limits on a data store, there is often a need to design solutions to scale beyond these single unit limitations. If we can employ a mechanism where we utilize a collection of smaller databases to store our data, where we can potentially access these smaller databases in parallel, we can optimize our data store solution for both size and speed. The mechanism here should take care of automatic data partitioning and database procurement. One common approach to solve this is sharding. With sharding, there are changes to data management and data access, irrespective of the method used. SQL Azure Federations provide an out-of-the-box sharding implementation for SQL Azure.

During some of our customer engagements, we uncovered situations where SQL Azure Federations would be a solution. In addition to simply scaling out beyond the 150GB size limitation of a single database, we have found federations useful in multi-tenant cloud solutions.

SQL Azure Federations in Multi-Tenant Solutions

Multi-tenancy is often a requirement of the cloud solutions we work on. These projects include building new solutions, adding this feature to existing single-tenant solutions, and even re-designing existing multi-tenant solutions to achieve increased scale and reduced operating costs. We often find SQL Azure Federations to be an extremely useful feature in meeting these requirements. A tenant becomes a natural boundary to partition data on, and with a large number of tenants, cost management becomes critical.

Let’s consider a solution that stores, at most, 100KB of tenant data, with each tenant’s data in its own database. The smallest database we can provision in SQL Azure today is 100MB, which equates to a monthly storage cost of $5/tenant. If we onboard 10,000 tenants, the baseline cost is now $50,000! Now, instead of separate databases, we could combine all tenants into a single database. Even if every tenant were to store their full 100KB of data, we could actually fit all 10,000 tenants in a 2GB database with room to spare, costing us only $13.99 monthly. That’s a big difference!

Now let's consider the situation where we add new features to our service, requiring more database storage, while we continue to onboard tenants. We can certainly increase the database size to accommodate the increased demand, but at some point we hit a limit. This limit could either be a cap on the database size, or the number of transactions a single database is capable of processing in an acceptable time. This is where sharding becomes extremely useful, and with SQL Azure Federations it's nice to know that at some point we can simply split our database while the service is still online, and scale our database out to meet growing demand.

We recently developed a number of samples demonstrating multi-tenant solutions on Windows Azure. One of these samples includes a multi-tenant sample application utilizing SQL Azure Federations and can be found at shard.codeplex.com. Let's look at an example based on the Shard project.

Adding Multi-tenancy to an Existing Solution

Moving from a single-tenant database approach to a shared database design is often a very costly endeavor. A common approach is to add a tenant identifier to each table containing tenant-specific data, and then rework the application in all the layers to include tenancy. Additionally, to support scaling beyond the resource limitations of a single database, tenants must be distributed across multiple databases. In return, the solution's operating cost is lower, thus increasing profits or letting a software vendor price their solution more competitively. In the past, we would essentially end up with a custom sharding solution to reduce costs and support scale. These custom solutions had complex designs providing tenant-level isolation in a single DB, handling of connection routing, and moving tenants across databases to meet growing demand.

Filtered Connections

The SQL Azure Federations filtered connections feature is extremely powerful for moving existing solutions to a shared database design. Filtered connections can be used to minimize the changes necessary in the business logic or data access layer, which are commonly required to make tenant ID part of all operations. Once our database connection is initialized with the tenant context, we can use the connection with the pre-existing business logic and data access layer. Although this feature minimizes the amount of work necessary in the application, changes to the schema are still necessary, small changes in the data layer are also required, and sometimes changes to the application may be needed due to the use of unsupported features in federations. Details of the SQL Azure Federations Guidelines and Limitations can be found on MSDN.

Even though we would add a [TenantId] column to the schema in order to store data for multiple tenants in the same table, we don’t necessarily have to change our code or our model to handle this. For example, let’s say we have a table containing tasks, and some feature in the application that inserts and displays the list of tasks in that table for a tenant. After adding the TenantId column to the table, without filtered connections, any code containing SQL statements like the following

SELECT * FROM [Tasks]

would need to be changed to something like:

SELECT * FROM [Tasks] WHERE TenantId = @TenantId

In fact, pretty much all code containing SQL statements like this would require changes. With filtered connections, application code using a statement like “SELECT * FROM [Tasks]” will not need to be changed.

Schema Changes

After a quick review to identify the use of unsupported features in the schema and the various workarounds, we start by identifying all the federated tables. Any table containing tenant-specific data will require a tenant id column, which is what we partition our data on. In addition, any table that contains a foreign key constraint referencing a federated table will also need TenantId added and will also become a federated table. For example, imagine we had an Orders table, which we decided to make a federated table. This table would have an OrderId, and quite often an OrderDetails table would contain a foreign key constraint to the OrderId on the Orders table. OrderDetails would also need the TenantId column added, and the foreign key constraint would also need to include TenantId.

For each of these federated tables we would also default the tenant id to the value used in establishing the filtered connection context, so that when inserting new records the business logic or the data access layer isn’t required to pass the tenant id as part of the operation.

A [TenantId] column is added to all tables containing tenant-specific data. This column is defaulted to the federation filter value, which is the value passed in the USE FEDERATION statement and part of the connection state on filtered connections. This allows us to INSERT data into the federated tables without having to include the [TenantId] as part of the statement, so data access code that currently does not deal with tenancy does not need to be changed to support this new column when inserting new records. All unique and clustered indexes on a federated table must include the federated column, so we have also made it part of the primary key. The "FEDERATED ON" clause is added to make it a federated table, and in it we associate the [TenantId] table column with the federation distribution key of [Tid].

Connection Context

Now that our schema has been updated, we need to address getting filtered connections in the application. The challenge here is that our database connections need to be initialized with the tenant context, which requires calling a specific federation statement (“USE FEDERATION…”), after the connection is opened and before the application uses this connection. This can be accomplished by implementing a method that takes a tenant identifier and either returns an open connection for the application to use, or a connection object with some event handler to handle this logic when the connection is opened.
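
A minimal sketch of such a method (an illustration only, not Full Scale 180's code; the federation and key names follow the TenantFederation/Tid convention used in this post):

using System.Data.SqlClient;

static class FederationConnections
{
    // Sketch only: returns an open connection routed to the federation member for one tenant.
    public static SqlConnection GetTenantConnection(string connectionString, int tenantId)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();

        using (var command = connection.CreateCommand())
        {
            // Route to the member holding this tenant's data and switch filtering on.
            command.CommandText = string.Format(
                "USE FEDERATION TenantFederation(Tid = {0}) WITH RESET, FILTERING = ON",
                tenantId);
            command.ExecuteNonQuery();
        }

        return connection;
    }
}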

Bringing it All Together

Let’s bring this all together and walk through the complete process for a simple web request on a typical multi-tenant solution. In this example we will consider how we write a new task to the federated task table and return the list of tasks for the tenant.

1) We receive a web request with our task information, the data is validated, and the tenant context for the request is established. How we resolve the tenant identifier is for another discussion, as this is something that can be handled in a number of different ways. We pass the tenant identifier in to a method to retrieve a database connection initialized with the tenant context.

  1. The method can return an open connection on which the "USE FEDERATION TenantFederation(Tid=137) WITH RESET, FILTERING=ON" statement has already been executed.
  2. Alternatively, we can attach an event handler to the connection object to execute this statement when the connection state changes to open.
  3. There are a number of approaches available if utilizing Entity Framework, such as wrapping the existing SQL provider, attaching an event handler to the connection object, or simply returning a context with the connection open and initialized.

2) The "USE FEDERATION" statement redirects the connection to the correct federation member containing data for tenant id 137. The application can then use this filtered connection exactly as it did when the database contained only one tenant's data:
INSERT INTO [Task] ([Name], …) VALUES (‘My Task’, …)

  1. Note that there is no need to include TenantId value

3) Retrieve tasks to return with view – SELECT * FROM [Tasks]

  1. Note that there is no need to include WHERE clause with TenantId

As our system grows and we onboard more tenants, we now have an architecture that allows us to dynamically scale the database out. We SPLIT the federation, and while the application is still online we have scaled our solution out across another database.
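
The SPLIT itself is a single T-SQL statement issued against the federation root; a minimal sketch (the root connection string and the split point of 5000 are assumptions for illustration):

using System.Data.SqlClient;

static class FederationScaling
{
    // Sketch only: issue an online SPLIT so tenants at or above the chosen boundary
    // move to a new federation member while the application keeps running.
    public static void SplitTenantFederation(string rootConnectionString)
    {
        using (var connection = new SqlConnection(rootConnectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "ALTER FEDERATION TenantFederation SPLIT AT (Tid = 5000)";
                command.ExecuteNonQuery();
            }
        }
    }
}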

SQL Azure Federations in Place of Custom Sharding

Some of our customers have already implemented a custom sharding solution. If it’s working, it may seem like we shouldn’t bother changing the solution to utilize SQL Azure Federations. We still discuss SQL Azure Federations with them, as there are benefits gained through Federations:

  • Tenant migration. It's sometimes difficult to predict which tenants are going to be small, medium, or large during the on-boarding process, making it difficult to deal with the changing resource needs of these tenants. A tenant may need to be moved to its own database, or an existing database may need to be split to handle the increased demand on the system. SQL Azure Federations support online database splitting.
  • Tenant-agnostic queries. With custom shards, the data access layer likely includes the tenant ID in its queries. With SQL Azure Federations, a connection filter provides tenant-level filtering, allowing queries to be written without the need to include tenant ID.
  • Database lookup. Typically, in a multi-tenant application, a master database provides a lookup index for all tenant databases (mapping tenants to either a shared database or individual databases, depending on the application’s database topology). With SQL Azure Federations, the tenant-level connection string automatically connects to the appropriate database, obviating the need for managing a master lookup database with connection strings.
  • Connection Pool Fragmentation. A custom sharding implementation will utilize multiple databases, hence multiple connections and connection strings to those databases. Each of those connections will result in a connection pool in the application server, often leading to issues with pool fragmentation in the application. Depending on the number of databases required to support the solution, this can lead to performance issues, and sometimes the only option is a complex redesign or disabling connection pooling. This is not the case with SQL Azure Federations, as connections to federations are handled much differently, resulting in a single connection pool.
Summary

SQL Azure Federations should be considered and evaluated with any solution with the requirement to dynamically scale out a relational database in Windows Azure. It should definitely be considered with any multi-tenant solution, new or existing. For more information on SQL Azure Federations I would recommend starting with some of the following resources.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

Ralf Handl, Susan Malaika and Michael Pizzo submitted OData Extension for JSON Data: A Directional White Paper to the OASIS OData TC on 5/18/2012 (Missed when submitted.) From the introduction:

Introduction

This paper documents some use cases, initial requirements, examples and design principles for an OData extension for JSON data. It is non-normative and is intended to seed discussion in the OASIS OData TC for the development of an OASIS standard OData extension defining retrieval and manipulation of properties representing JSON documents in OData.

JSON [1] has achieved widespread adoption as a result of its use as a data structure in JavaScript, a language that was first introduced as the page scripting language for Netscape Navigator in the mid 1990s. In the 21st Century, JavaScript is widely used on a variety of devices including mobiles [1], and JSON has emerged as a popular interchange format. JavaScript JSON was standardized in ECMAScript [2]. The JSON Data Interchange Format is described in IETF RFC 4627 [3].

JSON documents were initially stored in databases as character strings, character large objects (CLOBs), or shredded into numerous rows in several related tables. Following in the steps of XML, databases have now emerged with native support for JSON documents, such as PostGres [4], CouchDB [5], and MongoDB [6]. JSON databases are an important category of Document Databases [7] in NoSQL [8]. One of the main cited attractions of JSON databases is schema-less processing, where developers do not need to consult database administrators when data structures change.

Common use cases for JSON databases include:

  • Logging the exchanged JSON for audit purposes
  • Examining and querying stored JSON
  • Updating stored JSON
  • Altering subsequent user experiences in accordance with what was learned from user exchanges in the stored JSON

Just as the SQL query language was extended to support XML via SQL/XML[9], query languages such as XQuery are evolving to explore support for JSON, e.g., XQilla [10] and JSONiq [11]. XML databases such as MarkLogic [12] offer JSON support.

Note that for document constructs such as XML and JSON, temporal considerations, such as versioning, typically occur at the granularity of the whole document. Concrete examples include versions of an insurance policy, contract, and mortgage or of a user interface.

JSON properties are not currently supported in OData. We suggest that an OData extension be defined to add this support. Properties that contain JSON documents will be identified as such, and additional operations will be made available on such properties.

Status

Version 1.0 (May 18, 2012)

Authors

Ralf Handl, SAP
Susan Malaika, IBM
Michael Pizzo, Microsoft


Sean Michael Kerner (@TechJournalist) asserted “Open Data protocol is headed toward OASIS standardization and it could simplify the way Web data queries and updates occur” in a deck for his OData Protocol Close to Becoming an Open Standard article of 5/25/2012 for DevX:

The Open Data Protocol (OData), which Microsoft today uses to query and update Web data, could soon find much broader use as it heads toward standardization at the Organization for the Advancement of Structured Information Standards (OASIS).

One of the groups backing the OASIS standardization of OData is open source middleware vendor WSO2. While WSO2 is not currently integrating OData into its open source middleware, CTO Paul Freemantle sees the promise in the protocol for extending existing capabilities.

Freemantle explained that WSO2 today uses the Atom Publishing Protocol (AtomPub), which he says lets developers get and update data in a very RESTful way. OData extends AtomPub in a standardized way with some query capabilities.

"The thing that you also get from OData on top of AtomPub are query capabilities. So, a standardized URL syntax that, for example, can let you restrict a query to certain things," Freemantle said. "It also has the ability to understand a little bit better what the columns and values are in data."

Another benefit of OData is that it has both an XML and a JSON binding that is done in a RESTful manner.
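
As a rough illustration of those two points (a sketch only; the URL targets the public Northwind sample service hosted at odata.org, not a WSO2 or Microsoft production feed), a standardized query with a filter, a row limit, and the JSON binding looks like this:

using System;
using System.Net;

class ODataQuerySketch
{
    static void Main()
    {
        // $filter and $top are OData's standardized query options; $format=json requests
        // the JSON binding instead of the default AtomPub/XML payload.
        var url = "http://services.odata.org/Northwind/Northwind.svc/Products" +
                  "?$filter=UnitPrice%20gt%2020&$top=5&$format=json";

        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(url));
        }
    }
}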

"OData has the right balance of power and simplicity and I expect to see a lot of uses come out of it very quickly," Freemantle said.

He expects that OData will gain acceptance quickly for basic use cases where a developer just browses to find a query that can then be embedded in a mobile app.

The fact that OData is now headed to OASIS is important to Freemantle as it means the protocol will have an official stamp of standardization behind it. He noted that OASIS has a very clear and open process that is very straightforward.

"OASIS has a process where a standard can get registered as an international standard that is accepted by governments," Freemantle said. "From the point of view of getting it used widely, I see OData as being very powerful for governments that are trying to make data more open to their citizens."

OData and Microsoft Openness

Microsoft and the open source community have not always been the best of friends. When it comes to OData, Freemantle's view is that Microsoft deserves some credit.

"One of the things I think they've done well is that they set up an open community website for OData," Freemantle said. "There was a time when all the big vendors would just huddle in secret and then talk to the W3C or OASIS and say, this is what we want and then rubber stamp that as a standard."

In contrast, Freemantle noted that what Microsoft has done with OData is a very open process and they really tried to create a community around it as well.

"It's very different than the sort of model we saw five years ago," Freemantle said. "I think that it's nice to see change in the industry."

The fact that Microsoft is taking OData to a standards body is also key to WSO2's future integration with the technology. Freemantle said that there is nothing missing from the OData specification as it currently stands.

"We felt it would be nice for OData to be an OASIS standard first before we jumped in," Freemantle said. "It is part of Microsoft's open specification promise but once it's in OASIS, it's even more open."

 


The SQL Server Team (@SQLServer) reported Microsoft, SAP, IBM, Citrix, Progress Software and WSO2 co-submit OData to OASIS for standardization on 5/24/2012:

Last November, at PASS 2011, Ted Kummert, CVP, SQL Server, laid out Microsoft's data platform vision for the new world of data that we all live in – one that is characterized by a tremendous growth in the volume, variety and velocity of data that we need to deal with.

In his keynote, Ted pointed out that a key requirement of the data platform for the new world of data is that it must allow customers to seamlessly connect to the world's data – whether it be social sentiment across multiple social networks, or stock performance data from across the world's stock exchanges, or GDP data for developing countries. Customers need to have the ability to combine such external data with internal data coming from systems and applications they own, to answer new questions and drive new insights and actions.

A key capability needed to enable this vision is the support for application-agnostic open protocols to expose and consume data from diverse sources. Over the past three years, Microsoft has helped champion OData, a REST-based open data access protocol, via an open process on the public OData site (www.odata.org). Many components of Microsoft's data platform, including SQL Server 2012, SharePoint, Excel and Windows Azure Marketplace, already support OData. During this time, OData has enjoyed rapid adoption externally as well, with a strong ecosystem of OData producers, consumers and libraries - many of them open source - including Java, PHP, Drupal, Joomla, Node.js, MySQL, iOS and Android. Other examples of ISV adoption include SAP NetWeaver Gateway technology that exposes SAP Business Suite software to clients on diverse platforms through OData and the IBM WebSphere eXtreme Scale REST data service, which also supports OData.

Based on the level of interest and scale of adoption of OData, we are happy to announce that Citrix, IBM, Microsoft, Progress Software, SAP and WSO2 are jointly submitting a proposal to standardize OData formally via OASIS, an international open standards consortium. This will enable a broader set of developers and interested parties to influence the development of the standard in a formal manner, as well as drive broader adoption of OData.

We encourage you to find more details of this announcement here and learn more about OData itself and how it can help you unlock data silos at http://www.odata.org.

No significant articles today.


The Datanami Staff (@datanami) reported Yahoo's Genome Brings Data as a Service in a 5/23/2012 post:

When one thinks about companies with big data at their core, Yahoo might come to mind as an afterthought, even though the company has been dabbling in ways to wrangle massive web data since its inception at Stanford in the mid-1990s.

While not necessarily a “big data” vendor (at least until this week) the company has been instrumental in pioneering work on Hadoop along with other notable projects in web and search mining and machine learning.

This week Yahoo announced it would be turning some of its research efforts outward with the intention of showing it's capable of competing with established analytics platform providers, specifically in the lucrative online advertising market. The added benefit to the service is that it frees companies from having to buy their own infrastructure and experts to man the analytics operations.

While the new “Genome” service is targeted at customers that are finding new ways to target advertising down to the ultra-granular user level, there are a few notable elements that are worth pointing out, especially as they can apply to businesses that are still in search of a reliable, tested and scalable platform for big ad analytics.

The data-as-a-service offering will let advertisers comb through Yahoo's terabytes of data across its own networks and those of its partners, mashing their own data together with Yahoo's in real time.

As Jaikumar Vijayan describes, Genome is based on technology from interclick, a company that Yahoo acquired last December. At its core is a 20-terabyte in-memory database that pulls in and analyzes real-time behavioral and advertising-related data from Yahoo's multi-petabyte-scale Hadoop clusters. The company is using a blend of proprietary technology and best-of-breed commercial products from vendors such as Netezza and Microstrategy to do the data analytics on the real-time data.

Yahoo is not the first web giant on the block to create a service that lets users mash their own data with that of a large web services provider in a cloud/data-as-a-service model. Google's BigQuery, for instance, which launched a couple of weeks ago, allows users to do approximately the same thing—but with Google data, which is arguably more prolific.

Other companies that offer similar data-as-a-service offerings that let users mash their data in with that of other large-scale sources include Metamarkets, which also is a major player in the quick-time online advertising market.


<Return to section navigation list>

Windows Azure Service Bus, Active Directory and Workflow

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Richard Astbury (@richorama) posted Introducing the Azure Plugin Library on 5/21/2012 (Missed when posted):

tl;dr

An open source library of plugins installed with a command line tool (a package manager). Once a plugin is installed, it can be easily packaged with an Azure deployment to install application dependencies on a Web or Worker role.

Background

One of the key strengths of Windows Azure is the Platform as a Service offering. Why would you want to patch an operating system, manage the deployment of your application, and check the health of your infrastructure? It’s better left to someone else (Microsoft) so you can focus on your application. However, the pitfall is when your application depends on something extra being installed or configured on the machine.

There are a few ways for installing 3rd party components on an Azure instance. This blog post has a good summary of the options.

In summary, start-up tasks are the best mechanism available for installing dependencies, which is fine for something straightforward, but for more complicated components (like MongoDB for example) there is quite a bit of work involved in scripting out the installation. Projects like AzureRunMe help with this, but ideally you want something that just works, without you having to write a lot of script.

Azure Plugin Library

The Azure Plugin Library exploits an undocumented feature of the Azure SDK, whereby modules referenced in the Service Definition file are bundled with your application in a package, which is uploaded and deployed to the Azure instances. The SDK uses this mechanism to set up Remote Desktop, Connect, WebDeploy and Diagnostics, however, additional plugins can be added by copying the files to the “Windows Azure SDK\v1.6\bin\plugins” folder.

The Azure Plugin Library offers a range of additional plugins which you can download, and include with your Azure deployment. The library is hosted on GitHub, and is open source (accepting contributions).

http://richorama.github.com/AzurePluginLibrary/

Installing a plugin using APM

The AzurePluginManager (APM) is a command line utility to discover, install, update and remove plugins:

apm list                   (displays a list of plugins available in the library)
apm installed              (displays a list of installed plugins)
apm install [PluginName]   (installs the specified plugin)
apm remove [PluginName]    (removes the specified plugin)
apm update [PluginName]    (updates the specified plugin)
apm update                 (updates all plugins)

Download options for APM.

What plugins are available?

At launch only a few plugins are available but this list is set to grow. Community contributions will be accepted, so please fork the repository and issue a pull request with your own ideas.

How do I include a plugin in my Azure package?

Installed plugins will be included in your Azure package if you add them as an Import in your ServiceDefinition.csdef file:

<ServiceDefinition>
  <WorkerRole>
    <Imports>
      <Import moduleName="[PluginName]" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>
How do I add my own plugin to the library?

The library has some instructions on how to do this.


Bruce Kyle posted Visual Studio Team Announces Roadmap of Products, Pricing, Features to the US ISV Evangelism blog on 5/26/2012:

The final product lineup and specifications for the next release of Visual Studio have been announced on the Visual Studio blog. The announcement was made in the posting A look ahead at the Visual Studio 11 product lineup and platform support.

The tooling offered in Visual Studio 11 Express for Windows Phone is being added to Visual Studio.

Get insight on recent and future pricing and licensing changes.

On Windows Vista or higher, Visual Studio 11 targets .NET Framework 4.5 by default for managed applications and the VC11 tooling for native apps. Developers can use the IDE's multi-targeting support to run managed applications on Windows XP and Windows Server 2003 with .NET 4 and earlier versions of the framework.

Visual Studio Express

The Visual Studio 11 Express products make it easier than ever for developers to take advantage of new app development opportunities. Whether you are developing for Windows 8 or the web, Visual Studio 11 Express has the tools you need to turn your dreams into reality.

  • Visual Studio 11 Express for Windows 8
  • Visual Studio 11 Express for Web
  • Visual Studio 11 Team Foundation Server Express
Visual Studio Products

Visual Studio editions will be similar to the current line up.

No matter the size of your team, or the complexity of your project, Visual Studio 11—supported by Team Foundation Server—can help turn your ideas into software.

  • Visual Studio 11 Ultimate
  • Visual Studio 11 Premium
  • Visual Studio 11 Professional
  • Visual Studio 11 Test Professional
  • Visual Studio 11 Team Foundation Server

Compare features in Visual Studio 11 products

ALM

Visual Studio 11's Application Lifecycle Management (ALM) capabilities enable this diverse value chain to integrate and operate as a unit. Teams are provided with best-in-class tools to maximize productivity by eliminating waste through reduced cycle times supported by comprehensive reporting and analytics capabilities.

Dive deeper into Visual Studio 11’s robust ALM solutions

Lightswitch

LightSwitch, which launched last year as an out-of-band release, is now officially part of the Visual Studio 11 core product family. See What’s New with LightSwitch in Visual Studio 11? (Beth Massi).

Special Offers

See There’s no better time to seize the future for special offers. If you already have Visual Studio, you can save up to 35%. Also 20% savings on Visual Studio Test Professional.

For More Information

Nathan Totten (@ntotten) and Nick Harris (@cloudnick) produced CloudCover Episode 81 - Windows Azure Media Services on 5/25/2012:

Join Nate and Nick each week as they cover Windows Azure. You can follow and interact with the show at @CloudCoverShow.

In this episode, we are joined by Alex Zambelli — Senior Technical Evangelist — and Samuel Ng — Senior Development Lead — who discuss Windows Azure Media Services. Alex and Samuel go into detail about the features and services that are coming very soon in the preview for Windows Azure Media Services. We also discuss a variety of scenarios that this service will support as well as several real-world examples.

In the News:

Meet Windows Azure. Get a sneak peek at the latest from Windows Azure. You're invited to attend a special online event on June 7th streaming live from San Francisco (at 1:00pm PDT / UTC-7 hours). Visit register.meetwindowsazure.com by May 27 to add event information to your calendar and learn more about how you can win the chance to attend the event in-person.

Learn Windows Azure. Watch LIVE online on Monday, June 11th as we broadcast from TechEd 2012 in Orlando. LEARN how to use the latest Windows Azure features and services to build, deploy, and manage applications in the cloud with sessions delivered by Microsoft technical leaders Scott Guthrie, Mark Russinovich, Quentin Clark, and Bill Staples! Join us in person at TechEd or register to watch online.

Windows Azure DevCamps. Windows Azure Developer Camps are free, fun, no-fluff events for developers, by developers. You learn from experts in a low-key, interactive way and then get hands-on time to apply what you've learned. Register for an event close to you today!

Build the next great app at AngelHack – learn more on the Windows Azure blog. Prizes include $25,000 in seed capital and free Windows Azure for 12 months.

In the Tip of the Week, we discuss a blog post by Ranjith Ramakrishnan of Opstera. Ranjith gives five tips to optimize your application and deployments to reduce your Windows Azure bill.

Follow @CloudCoverShow
Follow @cloudnick
Follow @ntotten


Ryan Dunn (@dunnry) described Interpreting Diagnostics Data and Making Adjustments in a 5/25/2012 post:

At this point in our diagnostics saga, we have our instances busily pumping out the data we need to manage and monitor our services. However, it is simply putting the raw data in our storage account(s). What we really want to do is query and analyze that data to figure out what is happening.

The Basics

Here I am going to show you the basic code for querying your data. For this, I am going to be using LINQPad. It is a tool that is invaluable for ad hoc querying and prototyping. You can cut & paste the following script (hit F4 and add references and namespaces for Microsoft.WindowsAzure.StorageClient.dll and System.Data.Services.Client.dll as well).

void Main()
{
    var connectionString = "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey";
    var account = CloudStorageAccount.Parse(connectionString);
    var client = account.CreateCloudTableClient();
    var ctx = client.GetDataServiceContext();
    var deploymentId = new Guid("25d676fb-f031-42b4-aae1-039191156d1a").ToString("N").Dump();

    var q = ctx.CreateQuery<PerfCounter>("WADPerformanceCountersTable")
        .Where(f => f.RowKey.CompareTo(deploymentId) > 0 && f.RowKey.CompareTo(deploymentId + "__|") < 0)
        .Where(f => f.PartitionKey.CompareTo(DateTime.Now.AddHours(-2).GetTicks()) > 0)
        //.Take(1)
        .AsTableServiceQuery()
        .Dump();

    //(q as DataServiceQuery<Foo>).RequestUri.AbsoluteUri.Dump();
    //(q as CloudTableQuery<Foo>).Expression.Dump();
}

static class Funcs
{
    public static string GetTicks(this DateTime dt)
    {
        return dt.Ticks.ToString("d19");
    }
}

[System.Data.Services.Common.DataServiceKey("PartitionKey", "RowKey")]
class PerfCounter
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public long EventTickCount { get; set; }
    public string Role { get; set; }
    public string DeploymentId { get; set; }
    public string RoleInstance { get; set; }
    public string CounterName { get; set; }
    public string CounterValue { get; set; }
    public int Level { get; set; }
    public int EventId { get; set; }
    public string Message { get; set; }
}

What I have done here is set up a simple script that allows me to query the table storage location for performance counters. There are two big (and one little) things to note here:

  1. Notice how I am filtering down to the deployment ID (also called Private ID) of the deployment I am interested in seeing. If you use same storage account for multiple deployments, this is critical.
  2. Also, see how I have properly formatted the DateTime such that I can select a time range from the Partition Key appropriately. In this example, I am retrieving the last 2 hours of data for all roles in the selected deployment.
  3. I have also commented out some useful checks you can use to test your filters. If you uncomment the DataServiceQuery<T> line, you also should comment out the .AsTableServiceQuery() line.
Using the Data

If you haven't set absurd sample rates, you might actually get this data back in a reasonable time. If you have lots of performance counters to monitor and/or you have high sample rates, be prepared to sit and wait for awhile. Each tick is a single row in table storage. You can return 1000 rows in a single IO operation. It can take a very long time if you ask for large time ranges or have lots of data.

Once you have the query returned, you can actually export it into Excel using LINQPad and go about setting up graphs and pivot tables, etc. This is all very doable, but also tedious. I would not recommend this for long term management, but rather some simple point in time reporting perhaps.
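
Before reaching for Excel, you can also do a quick ad hoc rollup directly in LINQPad over the q variable from the script above; a minimal sketch:

// Sketch only: average each counter per role instance over the selected time range.
var summary = q
    .AsEnumerable()
    .GroupBy(c => new { c.RoleInstance, c.CounterName })
    .Select(g => new
    {
        g.Key.RoleInstance,
        g.Key.CounterName,
        Average = g.Average(c => double.Parse(c.CounterValue)),
        Samples = g.Count()
    })
    .OrderBy(x => x.RoleInstance)
    .Dump();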

For AzureOps.com, we went a bit further. We collect the raw data, compress, and index it for highly efficient searches by time. We also scale the data for the time range, otherwise you can have a very hard time graphing 20,000 data points. This makes it very easy to view both recent data (e.g. last few hours) as well as data over months. The value of the longer term data cannot be overstated.

Anyone that really wants to know what their service has been doing will likely need to invest in monitoring tools or services (e.g. AzureOps.com). It is simply impractical to pull more than a few hours of data by querying the WADPerformanceCountersTable directly. It is way too slow and way too much data for longer term analysis.

The Importance of Long Running Data

For lots of operations, you can just look at the last 2 hours of your data and see how your service has been doing. We put that view as the default view you see when charting your performance counters in AzureOps.com. However, you really should back out the data from time to time and observe larger trends. Here is an example:

[Chart: average CPU over the last 8 hours]

This is actual data we had last year during our early development phase of the backend engine that processes all the data. This is the Average CPU over 8 hours and it doesn't look too bad. We really can't infer anything from this graph other than we are using about 15-35% of our CPU most of the time.

However, if we back that data out a bit.:

[Chart: average CPU over several weeks, trending upward]

This picture tells a whole different story. We realized that we were slowly doing more and more work with our CPU that did not correlate with the load. This was not a sudden shift that happened in a few hours. This was manifesting itself over weeks. Very slow, for the same amount of operations, we were using more CPU. A quick check on memory told us that we were also chewing up more memory:

[Chart: memory usage over the same period, also climbing]

We eventually figured out the issue and fixed it (serialization issue, btw) - can you tell where?

[Chart: CPU usage dropping back down after the fix]

Eventually, we determined what our threshold CPU usage should be under certain loads by observing long term trends. Now, we know that if our CPU spikes above 45% for more than 10 mins, it means something is amiss. We now alert ourselves when we detect high CPU usage:

[Chart: alert triggered on sustained high CPU usage]

Similarly, we do this for many other counters as well. There is no magic threshold to choose, but if you have enough data you will be able to easily pick out the threshold values for counters in your own application.
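
As a rough sketch of that kind of check (an illustration only, not the AzureOps implementation; the counter name match and the 45%/10-minute values follow the example above):

// Sketch only: true when every CPU sample in the last windowMinutes stayed above threshold.
// Assumes the PerfCounter rows queried earlier and System.Linq.
static bool IsSustainedHighCpu(IEnumerable<PerfCounter> samples,
    double threshold = 45.0, int windowMinutes = 10)
{
    var cutoff = DateTime.UtcNow.AddMinutes(-windowMinutes);
    var recent = samples
        .Where(s => s.CounterName.Contains("% Processor Time") && s.Timestamp >= cutoff)
        .Select(s => double.Parse(s.CounterValue))
        .ToList();

    return recent.Count > 0 && recent.All(v => v > threshold);
}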

In the next post, I will talk about how we pull this data together analyzers, notifications, and automatically scale to meet demand.

Shameless plug: Interesting in getting your own data from Windows Azure and monitoring, alerting, and scaling? Try AzureOps.com for free!


Steve Plank (@plankytronixx) described a Video: The most common issues support sees for Windows Azure on 5/25/2012:

Markus McCormick, an escalation engineer working in Microsoft support in the UK, talks about the most common support issues they see coming in to the support centre. He gives advice on how to avoid these pitfalls.

If you are new to Windows Azure – don’t deploy any service in to production without watching this video first!


Adron Hall (@adron) posted #nodejs, why I’m basically porting EVERYTHING to it… on 5/25/2012:

Here's my list of why I'm moving everything I run to Node.js (i.e. maybe even THIS WordPress blog eventually).

  • Between the enhancements Google gave to JavaScript and the ease of use in writing with the language, it provides the least resistance of any framework and language stack out there.
  • Node.js + Express.js or Bricks.js + Jade + Every DB Choice on Earth really is a convincing reason too.
  • The developer community, if it isn’t now, is ridiculously close to the biggest development community on Earth. The JavaScript community goes into every corner of development too and crosses over easily into Ruby on Rails, .NET, and Java. Nobody is left untouched by JavaScript. This provides more avenues of joining up for projects than any other platform in existence.
  • Hiring for startups, mid- or enterprise business to build node.js and Javascript apps is 10x easier than hiring for anyone else right now. Which still makes it almost impossible. But that means it is “almost impossible” vs. “impossible” as it is with the other stacks!
  • Node.js is fast enough, and easy enough to distribute and less resource intensive than almost anything on the market. All that and it requires almost nothing to configure and setup compared to Apache, IIS, and a bunch of those other solutions.
  • The PaaS Solutions out there, thanks in large part to Nodejitsu, Nodester, and even the Windows Azure Node.js team, have made deploying Node.js apps the easiest stack to deploy around – hands down – no contest.
  • Node.js uses JavaScript (if you haven’t noticed) and most of the NoSQL solutions already speak the common language, JSON or BSON which makes layering more transparent in your architectures.
  • I could go on… but you get the idea. Node.js + JavaScript makes life EASIER and gives me more time to do other things. Other stacks typically have me tweaking and tinkering (which I do find fun) for hours more at a time per project than a Node.js Project. Generally though, I like to get a beer at some point, and Node.js get me there earlier!

Sure, I’m sticking to being polyglot. I’m even headed to the Polyglot Conference in Vancouver BC this evening. But nothing is as approachable as Node.js + JavaScript. Looking forward to a lot of PaaS discussion around Node.js and getting interoperability against .NET, Java, Rails and other frameworks.

So expect to see a lot more Node.js and JavaScript bits on the blog!


Tomasz Janczuk (@tjanczuk) recommended Develop on Mac, host on GitHub, and deploy to Windows Azure in a 5/24/2012 post:

If you are like most node.js developers, you develop your code on a Mac and host it on GitHub. With git-azure you can now deploy that code to Windows Azure without ever leaving your development environment.

What is git-azure?

Git-azure is a tool and runtime environment that allows you to deploy multiple node.js applications in seconds to a Windows Azure Worker Role from MacOS using Git. Git-azure consists of three components: the git-azure runtime, a command line tool integrated into the git toolchain, and your own Git repository (likely on GitHub).

The git-azure runtime is a pre-packaged Windows Azure service that runs an HTTP and WebSocket reverse proxy on an instance of a Windows Azure Worker Role. The proxy is associated with your Git repository that contains one or more node.js applications. Incoming HTTP and WebSocket requests are routed to individual applications following a set of convention-based or explicit routing rules.

The git-azure command line tool is an extension of the git toolchain and is accessible with the git azure command. It allows you to perform a one-time deployment of the git-azure runtime associated with your Git repository to Windows Azure, after which adding, modifying, and configuring applications is performed with regular git push commands and takes seconds. The git azure tool also helps with scaffolding applications, configuring routing and SSL, and access to Windows Azure Blob Storage.

Getting started with git-azure

For a detailed and up-to-date walkthrough of using git-azure, see the project site. At a high level, this is how you get started:

First, install the git-azure tool:

sudo npm install git-azure -g

Then download your *.publishSettings file for your Windows Azure account from https://windows.azure.com/download/publishprofile.aspx, go to the root of your Git repository, and call:

git config azure.publishSettings <path_to_your_publishSettings_file>
git azure init --serviceName <your_git_azure_service_name>

The git-azure tool will now provision your Windows Azure hosted service with git-azure runtime associated with your Git repository. This one-time process takes several minutes, after which you can add, remove, and modify applications in seconds, as long as you configure your Git repository with a post-receive hook following the instructions git azure init provides on successful completion.

Here is a screenshot of the terminal running the initialization process triggered by git azure init

Host multiple apps in the same Windows Azure VM instance

To add two applications, call:

git azure app --setup hello1
git azure app --setup hello2
git add .
git commit -m "new applications"
git push

Your apps are available within seconds at

  • http://<your_git_azure_service_name>.cloudapp.net/hello1
  • http://<your_git_azure_service_name>.cloudapp.net/hello2
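For context, each scaffolded app is just an ordinary node.js HTTP server. The sketch below is only an approximation of what hello1’s entry point might look like – the exact files git azure app --setup generates may differ – with the proxy handing the app its listening port via the environment:

// Hypothetical hello1/server.js; the real scaffold may differ.
var http = require('http');
var port = process.env.PORT || 8080;   // assumed: the git-azure proxy supplies the port

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from hello1 running in Windows Azure\n');
}).listen(port);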
Advanced usage (WebSockets, SSL) and next steps

The git-azure tool and runtime come with support for URL path as well as host name routing, WebSockets, SSL for HTTP and WebSockets (including custom certificates for each host name using Server Name Identification), and full RDP access to the Windows Azure VM for diagnostics and monitoring. Going forward, I plan to add support for accessing live logs in real time from the client, SSH access to Windows Azure VM, and support for multi-instance Azure deployments.

I do take contributions. If you want to contribute, get in touch and we will go from there. Otherwise, feel free to drop opinions and suggestions by filing an issue.


Avkash Chauhan (@avkashchauhan) warned Adding MSCHART to your Windows Azure Web Role and WCF service could cause exception in a 5/24/2012 post:

You might hit an exception when adding MSCHART to your Windows Azure Web Role and WCF service. Here are the steps to reproduce the problem:

  1. Create a new Windows Azure ASP.NET Webrole
  2. Add a new WCF Web Role to it
  3. Build and verify that it does works in Compute Emulator
  4. Add MsChart control on any ASP page within your Web Role
  5. Verify that web.config is updated with mschart control specific configuration
  6. Build and run it again in Compute Emulator

You will hit an exception as below:

To solve this problem you just need to add the following to your web.config inside the system.webServer section:

<validation validateIntegratedModeConfiguration="false"/>

The final system.webServer section looks like this:

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true" />
  <handlers>
    <remove name="ChartImageHandler" />
    <add name="ChartImageHandler" preCondition="integratedMode" verb="GET,HEAD,POST"
         path="ChartImg.axd" type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </handlers>
  <validation validateIntegratedModeConfiguration="false"/>
</system.webServer>

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Andrew Brust (@andrewbrust) asked Visual Studio LightSwitch: Will it emerge from sleeper status? in a 5/24/2012 guest post to Mary Jo Foley’s All About Microsoft blog for ZDNet:

I’m taking a couple weeks off before the busiest part of Microsoft’s 2012 kicks into full gear. But never fear: The Microsoft watching will go on while I’m gone. I’ve asked a few illustrious members of the worldwide Microsoft community to share their insights via guest posts on a variety of topics — from Windows Phone, to Hyper-V. Today’s entry is all about Visual Studio LightSwitch and is authored by Andrew Brust, who pens ZDNet’s Big on Data blog.

Once upon a time, Microsoft had a product called Visual Basic. It allowed for rapid development of data-centric business applications and gave rise to a huge ecosystem of custom controls. Millions of programmers were attracted to VB for its low barrier to entry, as it required very little code to produce relatively capable applications. And many developers stayed because VB also permitted significant coding once developers were motivated to write it.

That was a winning combination and one Microsoft effectively gave up in 2002, when it publicly released .NET. VB was demoted to a mere programming language supported by a new and more vast development framework. Yes, it had better enterprise application chops than did classic VB, and so it competed well with Java. But it forfeited the low barrier to entry and high productivity that made VB so popular.

Meanwhile, platforms like WordPress came along with large ecosystems of plug-ins and the ability to host code written in PHP. In Microsoft’s zeal to win the enterprise, it lost its franchise in the “productivity programmer” category, one that it practically invented.

Welcome back, Microsoft.

All this changed last summer when Microsoft introduced Visual Studio LightSwitch, a product which gave Microsoft a low-barrier-to-entry development option that is nonetheless based on the core .NET stack. While Microsoft Access offers productivity and data-centrism, it does not generate n-tier applications based on core .NET technologies, capable of running on Azure, Microsoft’s cloud platform.

I’ve been a supporter of LightSwitch since the mere idea of it was in incubation in Redmond and I am the author of the five-part whitepaper series that Microsoft features on the LightSwitch Web site.

It’s been the better part of a year since LightSwitch’s public release last summer, and the product’s traction so far has been lackluster. Productivity programmers don’t seem to have much awareness of LightSwitch and enterprise developers have been dismissive of it. Like VB, LightSwitch supports custom controls (and several other extension types) but support from the commercial component vendors has been mixed. Needless to say, I’m disappointed by this, and I have a few ideas on how it might change.

The second version of LightSwitch is now in Beta, along with the rest of Visual Studio version 11. The first version had the ability to produce Silverlight applications that run on the Windows desktop and in the browser, both on-premises and in the cloud. LightSwitch v2 will support all of that and will add the ability to expose data services — RESTful Web APIs based on Microsoft’s Open Data Protocol (OData).

The best part about these data services is they will be produced almost intrinsically. No code will be required and yet it will be easy to add code that implements sophisticated business logic. Combine this with LightSwitch’s ability to run on Windows Azure, and the product suddenly becomes a powerhouse on the server. That moves it past filling the ten-year-old line-of-business app productivity gap. Instead, LightSwitch will now be a productivity programmer’s tool for the Web and cloud data world, enabling back-end services for mobile apps and enterprise apps written in any language. I’m excited.

But more needs to be done. LightSwitch is a product that deserves to succeed. And that’s why Microsoft needs to change and improve its game as it evolves, evangelizes and markets the product.

Below is a to-do list for LightSwitch’s product and marketing teams. I offer these ideas simultaneously in support and in constructive criticism.

1. Market to productivity developers, a group that includes everyone from power users in Excel, to WordPress/Drupal/Joomla developers and even JavaScript jocks. Work with the Office team, go to non-Microsoft Web developer conferences and push content beyond MSDN (the Microsoft Developer Network) to SlideShare and YouTube. Get the analysts and press involved. Go mainstream.

2. Market differently to enterprise developers: instead of suggesting they change the way they work, show them how LightSwitch can support their work. Show them how to build data services with LightSwitch for their full-fledged .NET front-end application. Or show them how their .NET skills allow them to create extensions to the core LightSwitch product. Ease pain points for, and support, enterprise developers. Don’t burden them. Evangelize; don’t proselytize.

3. Stoke the ecosystem. Release a variety of free extensions but leave room for improvement and enhancement and solicit that from the community. Provide recognition for community influencers and make co-op marketing funds available for commercial third party extension vendors. Get a LightSwitch-focused online magazine running, add social features and gamify it. Host a virtual conference at least twice a year and provide sustained promotion of the content in between events.

4. Enhance LightSwitch’s data visualization capabilities, both through core capabilities and influencing third party extensions. LightSwitch’s data centrism makes it a great tool for this scenario, but a few more pieces need to be there to make it a sweet spot. Connect LightSwitch to major BI and data warehouse platforms. Work with the Excel and Microsoft BI teams for “better together” integration. Then target power users with market messaging around this (see point 1, above).

5. Integrate LightSwitch with Office and Office 365. There could be a ton of synergy between these products, and some extensions have already emerged that tie LightSwitch with Word, Excel and PowerPoint. But there needs to be core first-class support built right into the product.

6. Go mobile. While LightSwitch’s Silverlight applications will run in the desktop version of Internet Explorer on Intel- and AMD-based Windows 8 tablets, they won’t run on the Metro side of Windows 8 (and so will not run at all on ARM-based tablets) and they won’t run on iOS or Android devices either. Fix this, and promote the fix like crazy. Because mobile + cloud + data services is the holy trinity of software today.

LightSwitch has a lot of potential and version 2 will provide even more. But raw product capabilities are not enough. Savvy strategy and marketing are necessary to make the product successful. The market needs this product, but Microsoft needs to show the market why.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Matt Weinberger (@M_Wein) reported Microsoft Windows Azure Is So Popular, It’s Turning Customers Away in a 5/25/2012 post to the Services Angle blog:

Capacity planning is a huge headache for the IT professional, no matter the scale: Too much capacity, and you’re wasting money and resources, but too little and you’re risking a shortage. Prospective Microsoft Windows Azure cloud customers in the South Central US data center region are discovering they are on the wrong side of that equation, finding themselves without the ability to purchase compute or storage resources (Current customers are unaffected).

Word comes in the form of an official Microsoft Windows Azure blog entry, which explains that the team is both “excited” and “humbled” by Azure’s adoption rates. All the same, while Microsoft is expanding the platform’s capacity as quickly as it can, there are limits to what its current infrastructure can handle, and it’s hit a threshold in some regions where, in order to help current customers scale, new customers have to be left in the cold. As of today, new customers in the South Central US Azure region won’t be able to purchase compute or storage resources – though Service Bus, Caching, Access Control and SQL Azure remain available.

Existing customers deployed in that region aren’t affected by this news. On the flip side, Windows Azure recently launched two new regions, East US and West US, the former of which also just added SQL Azure cloud databases to its services lineup.

The lesson here is that there’s such a thing as too much popularity. Remember, to the CIO, employees are the customers when it comes to the cloud. It’s impossible to get your capacity demands exactly right, but when you miss the mark and have to scale down for capacity, you’re not exactly delighting anybody. At the same time, though, shutting down signups is almost definitely better than scaling out of control and causing another outage.


David Linthicum (@DavidLinthicum) asserted “Enterprises that use cloud computing resources must often retrofit management after the fact. Don't be those guys” in a deck for his Don’t make cloud management an afterthought article of 5/25/2012 for InfoWorld’s Cloud Computing blog:

It's clear how public cloud computing is used today by larger enterprises: It's a storage system here, an API providing data there, and cloud-delivered app dev testing somewhere else. However, what's almost never apparent is how these providers should be managed to the right level of operational efficiency.

Why? Because those charged with managing the internal resources typically found in data centers don't -- or refuse to -- work with public cloud providers. Thus, organizations using public cloud resources are either managing them catch as catch can or not at all.

The downside of this is evident: A cloud-delivered storage system goes offline, causing internal application failures, and there are no procedures or technologies in place to correct the problem in time to avoid damaging the business. Or a customer-facing cloud-based Web application is under attack, and other than hoping the customer is in a forgiving mood, you're screwed.

A sad truth is that cloud computing management is often an afterthought or not a thought at all. Why? Many cloud computing projects emerge outside the view of corporate governance and IT. Thus, there is no voice for security, governance, and management, unless those deploying the cloud-based solutions have the funding and the forethought. They almost never do.

How do you solve this problem? If you're in enterprise leadership, make sure you set policies around the management of cloud-based systems. This includes projects that haven't been on your radar thus far. This means forgiveness for what has been done to date, and it means rendering aid to those outside-of-IT efforts. Those unsanctioned and ad hoc cloud solutions are here to stay, so help their owners in finding management resources and technology.

If you're one of those people -- in IT or not -- who is building a cloud-based solution without an explicit management strategy, pause! Create that strategy before you deploy. After all, whether for a business process or a formal IT effort, you don't move forward without clearly defined management procedures and the right management tools. Why would you treat the cloud-based processes and tools any different, whether driven by the business or IT? You shouldn't.


James Staten (@staten7) asserted Cloud Inefficiency - Bad Habits Are Hard To Break in a 5/24/2012 post to his Forrester Research blog:

We all have habits we would like to (and should) break such as leaving the lights on in rooms we are no longer in and good habits we want to encourage such as recycling plastic bottles and driving our cars more efficiently. We often don't because habits are hard to change and often the impact isn't immediate or all that meaningful to us. The same has long been true in IT. But keep up these bad habits in the cloud, and it will cost you - sometimes a lot.

As developers, we often ask for more resources from the infrastructure & operations (I&O) teams than we really need so we don't have to go back later and ask for more - too painful and time consuming. We also often don't know how many resources our code might need, so we might as well take as much as we can get. But do we ever give it back when we learn it is more than we need?

On the other hand, I&O often isn't any better. The first rule we learned about capacity planning was that it's more expensive to underestimate resource needs and be wrong than to overestimate, and we always seem to consume more resources eventually.

Well, infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) clouds change this equation dramatically, and you can reap big rewards if you change with them. For example, sure, you can ask for as many resources as you want - there's no pain associated with getting them and no pain to ask for more, either. But once you have them and figure out how much you really need, it heavily behooves you to give back what you aren't using, because you are paying for what you allocated whether you are really using it or not.

Cloudyn, a SaaS-based cloud cost management company, knows how much this is costing its enterprise clients, as it uses monitoring capabilities to map the difference between what its clients are paying for and what they are really using. It recently shared with Forrester the latest findings in its CloudynDex metrics report that aggregates cloud use and cost data from more than 100 of its clients running on Amazon Web Services' (AWS) IaaS cloud (collected randomly and anonymously with its clients' permission). The data is clear proof that we are bringing these bad habits to the cloud. These clients are spending between $12,000 to $2.5 million per year with AWS and throwing away about 40% of that expense. What kind of waste are they incurring?

  • Overallocation of resources. Cloudyn found that the degree of sustained utilization across the 400,000-plus instances being monitored was just 17%. The company said the common issue was allocating Large or Extra Large instances when a Small or Medium would suffice. This one's easy to find (especially with a cost analysis tool like Cloudyn) and easy to fix. Not surprisingly, Cloudyn also found that the larger the instance, the worse the utilization, with Extra Large instances averaging just 4% utilization. That's worse than the average utilization of physical servers in 2001 - before virtualization. For shame!
  • Static workloads. Cloudyn also found that many client instances were forgotten and left running but not doing anything for days, even months, at a time. Cloud vendors will certainly be happy to take your money for this. But, really. Is it really that hard to shut down an instance and restart it when needed?
  • Not using Reserved Instances. The statistics also showed the average client had a persistent use of cloud instances that would have benefited from the discounts that come with AWS Reserved Instances but that clients weren't taking advantage of these discounts, which can amount to up to 70% lower bills. This one takes longer to assess, but once you know you will be staying in the cloud for a year or more, there's no excuse not to take advantage of this. Customers using Cloudyn or similar cost-tracking tools that continuously track resource activity are quickly getting wise to this benefit. Cloudyn's data shows a big increase in adoption of Reserved Instances from January to May of 2012.

Forrester has found a number of other bad habits from cloud users, some of which were noted in our latest Forrester Cloud Playbook report "Drive Savings And Profits With Cloud Economics," such as not configuring load balancing/auto-scaling properly to turn off instances fast enough as demand declined, not leveraging caching enough between application layers or at the edge, and not optimizing packet flows from the cloud back to your data center.

It's understandable why we bring our bad habits with us to the cloud. Heck, simply by using the cloud, we're saving the company money. But don't let the optics of the cloud pull blinders over the real costs. A medium instance at $0.32 per hour sounds so cheap, but when daily consumption leads to $130,000 in annual spend, which was the average for this group of customers, then 40% savings is very, very significant.
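To put rough numbers on that claim, here is a back-of-the-envelope sketch with illustrative prices (not Cloudyn's actual model):

// Back-of-the-envelope cloud spend math; all figures are illustrative.
var hourlyRate = 0.32;                          // on-demand medium instance, $/hour
var annualPerInstance = hourlyRate * 24 * 365;  // about $2,803 per always-on instance
var annualSpend = 130000;                       // average annual AWS spend cited above
var wasteRate = 0.40;                           // roughly 40% of spend reported as waste

console.log('One always-on instance: $' + annualPerInstance.toFixed(0) + '/year');
console.log('Estimated waste: $' + (annualSpend * wasteRate).toFixed(0) + '/year');

At that scale the cheap-per-hour optics hide roughly $52,000 a year left on the table, before any Reserved Instance discount is even considered.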

How are you using cloud platforms? With what apps, dev tools, software? Participate in Forrester's latest survey on this topic today.

And for more on the true costs of using the cloud, read Andrew Reichman's report on cloud storage and set a Forrester.com research alert on analyst Dave Bartoletti, whose next report will provide a full breakdown of the cost of the cloud versus in-house deployments. You can learn more about your use of cloud by subscribing to Cloudyn - your cloud consumption data will be automatically correlated with others in the CloudynDex report to help you continually get a clearer picture of real cloud use. We all benefit by learning from others.


Clint Edmonson (@clinted) described Windows Azure Recipe: Enterprise LOBs in a 5/23/2012 post:

Enterprises are more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need to manage. It’s frequently the case that our customers simply can’t scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications in the hands of business users becomes slower and more expensive every day.

Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure’s Access Control services. In fact, we’re seeing many of our customers (both large and small) see huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure.

Drivers
  • Cost Reduction
  • Time to market
  • Security
Solution

Here’s a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed:

[Architecture diagram]

Ingredients
  • Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php, or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort.
  • Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
  • Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise’s corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user’s identity and credentials. The source code for an on-premise STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience.
  • Reporting – businesses live and die by their reports and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications.
  • Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN.
  • Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. It can be very granular, allowing us to specify exactly what tables and columns to synchronize, setup filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized
Training Labs

These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)


Windows Azure (16 labs)

Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML


SQL Azure (7 labs)

Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.


Windows Azure Services (9 labs)

As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

• Charles Babcock (@babcockcw) asserted “VMware-commissioned study claims advanced management capabilities make VMware less expensive than Microsoft virtualization products when you factor in admin time. But beware the red flags” in a deck for his VMware Claims It's Cheaper Than Microsoft article of 5/22/2012 for InformationWeek’s Hardware blog:

VMware claims its virtualization products are cheaper than Microsoft's when the total cost of ownership is calculated over a two-year period. To get to that conclusion, it sponsored a study by Principled Technologies that takes a close look at two years of administrator time and expense required to run the respective systems.

Readers of this report: beware. VMware got itself in trouble last July by charging for vSphere 5 Enterprise Edition based on the amount of virtual memory the customer used. It initially set a 48-GB virtual memory limit per license; three weeks later, in the face of customer feedback, it raised it to 96-GB per license. This study is part of VMware's continued response to that blowup. [Emphasis added.]

Setting the price based on virtual resources instead of physical resources struck directly at the budgets of some of VMware's largest customers. The companies exceeding the 48-GB limit were most likely the skilled implementers gaining the most value from VMware products. Proliferating cores per CPU were constantly raising the number of VMs that could be run on one host. VMware decided to shift the basis for pricing so that it too could ride the CPU escalator. Customers protested vigorously. Microsoft, still pricing per physical CPU, started referring to the VMware "v-tax."

What's a dominant virtualization vendor to do? Well, for one thing, it commissioned a study that shifts attention to the cost of the people required to run virtual machines in the data center.

Principled Technologies compared five administrative tasks run on each vendor's central virtualization system: vSphere 5.1 and vCenter for VMware, and System Center with Virtual Machine Manager for Microsoft. The five tasks and their conclusions were:

1. Move virtual machines to perform routine maintenance on a host server, put the server in maintenance mode, then move the VMs back. The claim: vSphere took 79% less time than System Center.

2. Add new storage volumes to a virtualized cluster using VMware Storage Distributed Resource Scheduler. System Center lacks an equivalent feature, so the testers used System Center's Virtual Machine Manager component, plus an administrator's manual decision-making. The latter included time for a System Center admin to warn users there would be a brief VM application outage. The claim: The operation took "95% less time with VMware as compared to Microsoft."

3. Some VMs engage in intensive I/O, becoming a "noisy neighbor" and interfering with the operation of neighboring VMs on the same host. Virtualization admins like to isolate such a noisy neighbor to protect other users. The vSphere admin can do this through the vCenter management console. Again, System Center has no directly equivalent feature. The claim: It takes 97% less time to do it the VMware way, since it's more automated.

4. Provisioning new hosts is a routine task accomplished by vSphere Auto Deploy and System Center's Configuration Manager 2007 R3 bare metal deployment task sequence. The claim: Executing the provisioning was "up to 78% faster" under vSphere.

5. Perform a non-disruptive disaster recovery test. VMware's Site Recovery Manager was tested against Microsoft's equivalent process. The claim: The test is 94% less time consuming under VMware.

The testers assumed these tests would be carried out several times over a two-year period by a senior systems administrator, paid $88,600 a year. The resulting savings to the customer who chose VMware would be $37,540 in management time costs. Because VMware also operates hosts with a higher concentration of VMs, the study added in capital cost savings, using VMware's Virtualization Cost Per Application calculator.

Here are a few red flags to note, in addition to VMware's sponsorship. [Emphasis added.]

First, the Virtualization Cost Per Application calculator is a black box. Wouldn't it be better to stick to comparisons where neither side is doing out-of-view calculations based on its own assumptions? To its credit, the Principled Technologies testers acknowledge that the calculator assumes "a density advantage of 50% more VMs for VMware over Microsoft." The testers scaled that back to "a more conservative estimate of 25%" and did cost estimates based on that.

Unfortunately, we still don't know whether 25% is accurate.

Second, and most notable to me is the noisy neighbor calculation. The VMware virtualized data center tends to be used by a larger company that's more intensely virtualized than the Microsoft Hyper-V user. For that reason, I suspect noisy neighbors tend to occur more frequently in the VMware environment than in typical Microsoft ones.

I'm also suspicious that an administrative function available in vSphere, for which there's no equivalent in Microsoft's environment, was twice chosen for time-sensitive testing. The automated procedures in vSphere were twice pitted against manual ones. If Microsoft is used more frequently in small and midsize businesses (SMBs), it's quite possible noisy neighbors occur less frequently there and will not result in the amount of lost administrative time the comparison suggests.

Third, the addition of the storage task, with time injected for the System Center admin to warn application users of an outage, also favors VMware, perhaps excessively so. Yes, it would be a lot more time consuming to proceed in that manner, but that's a good reason why the addition might occur after hours or on a weekend in a Microsoft shop. If that's not optimum, it's still the way many shops function. So I wonder what a more realistic comparison might yield on that count?

By the time you're done reading the five test results, you see through the VMware-centric point of view, and few Windows Server and System Center users are going to be convinced by it. On the other hand, if you're a VMware user, you now have a counterargument if the CFO's been pressing for a reduction in those growing VMware bills--that VMware costs you more up front but saves you administrative time after that. It's hard for VMware defenders to quantify the VMware advantage under the pressure of daily operations. The Principled Technologies study has its flaws, but it's ammunition for those who find little at hand in the pricing argument.

And therein lies the advantage of this study to VMware. It won't convert Microsoft users to more efficient VMware administrative tasks. It's a buttress for VMware's own advocates inside the enterprise, who are under pressure from the rising cost of the VMware share of the budget. At some of VMware's largest customers, the move to virtual memory pricing in July prompted a two-fold reaction: Customers decided to pressure VMware to raise the limit to where they were less affected; they also decided to convert parts of the data center to another set of products to gain leverage over VMware. Microsoft, with "free" virtualization embedded in Windows Server 2008, was an obvious candidate.

If this study had been done from the point of view of the typical Windows administrator, who must manage both physical and virtual resources, it might have reached entirely different conclusions. But, of course, displaying an understanding of System Center administration wasn't the goal. Highlighting the differences between the two, where VMware might claim advantage in a slender band of advanced virtualization characteristics, was the point.


<Return to section navigation list>

Cloud Security and Governance

• Dave Asprey (@daveasprey) reported Patriot Act Study Shows Your Data Isn’t Safe in Any Country in a 5/23/2012 post to the Trend Cloud Security blog:

Global data privacy law firm Hogan Lovells just published a white paper outlining the results of a study about governmental access to data in the cloud. The paper was written by Christopher Wolf, co-director of Hogan Lovells’ Privacy and Information Management practice, and Paris office partner Winston Maxwell. The Hogan Lovells press release is here and full white paper here.

Worldwide IT press picked up the study, including Computerworld, PC World, IT World, and IDG News. Unfortunately, the articles generally say “The US Patriot Act gives the US no special rights to data” and downplay differences between laws in the US and ten other countries.

It’s true that in most countries, if the government wants your company’s data, they have a way to get it. It’s also true that if a Western government wants your data sitting on a cloud server in another Western country, they have a way to get it.

I confirmed this last point in person with a Deputy Director from the FBI at a security conference. I asked, “What would you do if you needed to get data from a German company at a cloud provider in Germany for a US investigation? You have no rights there.” She smiled and said something like, “We would just call our colleagues in German intelligence and ask for the data. They would give it to us because we would return the favor on their next investigation.” There are also MLAT treaties in effect of course that put some legal framework around this.

The study did point out – in the fine print – that only Germany and the US have gag order provisions that prevent a cloud provider from mentioning the fact that it has disclosed the data you paid it to protect. This is the part of the Patriot Act that hurts US cloud providers.

Any IT security professional would want to know if his company’s data has been accessed, regardless of whether it is lawful access from a government investigation or a cybercriminal attack. The point is that if it’s YOUR data, anyone who wants to see it should present YOU with a lawful order to disclose the data.

For a government to ask your cloud provider to do this behind your back is underhanded, cowardly, and bad for all cloud providers worldwide. It fundamentally breaks the trusted business relationship between a cloud provider and its customers.

But at a higher level, this research proves a bigger point – that your data will be disclosed with or without your permission, and with or without your knowledge, if you’re in one of the 10 countries covered. What’s an IT professional to do?

There is only one answer, and it’s probably obvious: encryption. If your data sitting in the cloud is “locked” so only someone with keys can see it, you’re protected. If a government – or anyone else – wants to see your data, they need to ask you – lawfully – for the keys, which gives you the right to fight the request if it is indeed lawful.

The small detail that matters most here is how you handle the encryption keys. If your data is sitting right next to your keys at the same cloud provider, the cloud provider can be forced to hand over your keys and your data, and you don’t get any real protection.

On the other hand, if your data is safely encrypted at your cloud provider, and your encryption keys are on a policy-based key management server at another cloud provider, or under your own control, then your keys can only be disclosed to authorized parties, and you control who the authorized parties are.

In other words, policy based key management will protect you from potentially unlawful data requests from your own government, from other governments, and from cybercriminals.
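To illustrate the idea, here is a minimal Node.js sketch. It is only a sketch – key generation, storage, rotation, and policy enforcement are exactly the parts a real key management service handles, and the key below is created in-process purely for the example:

var crypto = require('crypto');

// In practice the key comes from a policy-based key server you control,
// never from the cloud provider that stores the ciphertext.
var key = crypto.randomBytes(32);   // 256-bit AES key
var iv  = crypto.randomBytes(16);   // per-object initialization vector

function encryptForCloud(plaintext) {
  var cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  return cipher.update(plaintext, 'utf8', 'hex') + cipher.final('hex');
}

function decryptFromCloud(ciphertext) {
  var decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
  return decipher.update(ciphertext, 'hex', 'utf8') + decipher.final('utf8');
}

var blob = encryptForCloud('customer record 42');   // this is all the provider ever sees
console.log(decryptFromCloud(blob));                // only possible for whoever holds the key

A subpoena served on the storage provider yields only ciphertext; the lawful order for the key still has to come to you.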

It’s time to ask yourself why you’re not using policy based key management in the cloud, if you’re not doing it already.


Chris Hoff (@Beaker) posted Bridging the Gap Between Devs & Security – A Collaborative Suggestion… on 5/23/2012 (missed when posted):

After my keynote at Gluecon (Shit My Cloud Evangelist Says…Just Not To My CSO,) I was asked by an attendee what things he could do within his organization to repair the damage and/or mistrust between developers and security organizations in enterprises.

Here’s what I suggested based on past experience:

  1. Reach out and have a bunch of “brown bag lunches” wherein you host-swap each week; devs and security folks present relevant, interesting, or new solutions in their respective areas to one another
  2. Pick a project that takes a yet-to-be-solved interesting business challenge that isn’t necessarily on the high priority project list and bring the dev and security teams together as if it were an actual engagement.

Option 1 starts the flow of information. Option 2 treats the project as if it were high priority but allows security and dev to work together to talk about platform choices, management, security, etc. and because it’s not mission critical, mistakes can be made and learned from…together.

For example, pick something like building a new app service that uses node.js and MongoDB and figure out how to build, deploy and secure it…as if you were going to deploy to public cloud from day one (and maybe you will.)
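If you want a concrete seed for that exercise, a skeleton like the one below is enough to get both teams arguing productively about input validation, transport security, and where credentials live (this assumes the node-mongodb-native driver; the connection string and collection name are placeholders):

var http = require('http');
var MongoClient = require('mongodb').MongoClient;   // assumed: node-mongodb-native driver

// Placeholder connection string - deciding where this secret lives
// (config file, environment, vault) is itself a dev/security conversation.
MongoClient.connect('mongodb://localhost:27017/pilot', function (err, db) {
  if (err) throw err;
  var events = db.collection('events');

  http.createServer(function (req, res) {
    // Record every request so the security team has an audit trail to review.
    events.insert({ path: req.url, at: new Date() }, function (insertErr) {
      res.writeHead(insertErr ? 500 : 200, { 'Content-Type': 'text/plain' });
      res.end(insertErr ? 'error\n' : 'logged\n');
    });
  }).listen(process.env.PORT || 3000);
});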

You’ll be amazed to see the trust it builds, especially in light of developers enrolling security in their problem and letting them participate from the start versus being the speed bump later.

10 minutes later it’ll be a DevOps love-fest


<Return to section navigation list>

Cloud Computing Events

• Himanshu Singh (@himanshuks) posted Windows Azure Community News Roundup (Edition #20) on 5/25/2012:

Welcome to the latest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure. Here are the highlights from this week.

Articles and Blog Posts

Upcoming Events, and User Group Meetings

North America

Europe

Other

Recent Windows Azure Forums Discussion Threads

Send us articles that you’d like us to highlight, or content of your own that you’d like to share. And let us know about any local events, groups or activities that you think we should tell the rest of the Windows Azure community about. You can use the comments section below, or talk to us on Twitter @WindowsAzure.


Michael Collier (@MichaelCollier) reported on 5/24/2012 Windows Azure Kick Start – Returning to Columbus, Ohio on 6/9/2012:

The Windows Azure Kick Start tour this spring has been a fairly successful event. The tour hit 13 cities across the Midwest and Central parts of the United States. The purpose of these kick starts was to introduce people to Windows Azure and give them the basic foundation they need to build applications on Windows Azure.

On Saturday, June 9th there will be another Windows Azure Kick Start event held in Columbus, OH. If you missed the one earlier this spring (on April 5th), now is your chance!

During the day we’ll show you how to sign up for Windows Azure (leveraging your MSDN benefits if applicable), how to build a basic ASP.NET application and connect it to SQL Azure, ways to leverage cloud storage, and even some common scenarios for using Windows Azure. As if that’s not enough, lunch and prizes will also be provided!

What are you waiting for? Step away from the landscaping and step into a comfy air conditioned office and learn about Windows Azure. We’ll have fun – I promise!!

Register now at http://columbuswaks.eventbrite.com/


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Alex Williams (@alexwilliams) asserted Amazon Web Services Launches Export Service for VMs but Getting Application Data Out Remains Elusive in a 5/25/2012 post to the Silicon Angle blog:

Amazon Web Services has announced a service that allows customers to export previously imported EC2 instances back to on-premise environments. [See article below.]

That’s a shift that will send ripples throughout the market and bolsters the argument for a federated cloud that allows for the free flow of data across multiple clouds.

But that is not exactly what AWS is promising in this announcement. AWS is offering the capability to export virtual machines to on-premise environments. Sure, it provides a counter to companies concerned about lock-in with AWS. But it does not address the full issue about allowing for the free flow of application data.

Thomas Hughes-Croucher is now a consultant who runs and owns Jet Packs for Dinosaurs, which specializes in high performance Web sites. In his previous work at Yahoo, he often spoke about the issue of data portability. He gave this presentation a few years ago, which I think outlines what AWS and other vendors really need to do if they wish to discard the notion that they lock in customer data:

The Cloud’s Hidden Lock-in: Network Latency

Croucher wrote to me in an email today, referring to the presentation. He said:

…while machine portability is great – the development machines are just a small part of the overall picture. The biggest, hardest thing to move around is data, which this doesn’t really address.

Portability of VMs will make it easier to move between cloud vendors, but it doesn’t solve getting all of your (big) data out of the vendor. While Amazon now supports import/export via hard disk, it’s still a huge problem because the vendors don’t make it affordable to migrate your application data.

The AWS service works in concert with AWS VM Import which gives the ability to import virtual machines in a variety of formats into Amazon EC2. That means customers can migrate from an on-premises virtualization infrastructure to the AWS Cloud.

Customers can initiate and manage the export with the latest version of the EC2 command line (API) tools.

The service allows for the export of Windows Server 2003 (R2) and Windows Server 2008 EC2 instances to VMware ESX-compatible VMDK, Microsoft Hyper-V VHD or Citrix Xen VHD images. AWS says it plans to support additional operating systems, image formats and virtualization platforms in the future.


In the same vein:

Jeff Barr (@jeffbarr) described the VM Export Service For Amazon EC2 in a 5/25/2012 post:

The AWS VM Import service gives you the ability to import virtual machines in a variety of formats into Amazon EC2, allowing you to easily migrate from your on-premises virtualization infrastructure to the AWS Cloud. Today we are adding the next element to this service. You now have the ability to export previously imported EC2 instances back to your on-premises environment.

You can initiate and manage the export with the latest version of the EC2 command line (API) tools. Download and install the tools, and then export the instance of your choice like this:

ec2-create-instance-export-task -e vmware -b NAME-OF-S3-BUCKET INSTANCE-ID

Note that you need to specify the Instance ID and the name of an S3 bucket to store the exported virtual machine image.

You can monitor the export process using ec2-describe-export-tasks and you can cancel unfinished tasks using ec2-cancel-export-task.

Once the export task has completed you need only download the exported image to your local environment.

The service can export Windows Server 2003 (R2) and Windows Server 2008 EC2 instances to VMware ESX-compatible VMDK, Microsoft Hyper-V VHD or Citrix Xen VHD images. We plan to support additional operating systems, image formats and virtualization platforms in the future.

Let us know what you think, and what other features, platforms and operating systems you would like us to support.


<Return to section navigation list>
