Monday, January 03, 2011

Windows Azure and Cloud Computing Posts for 1/3/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

My Windows Azure Uptime Report: OakLeaf Table Test Harness for December 2010 (99.90%) post of 1/3/2011 shows that my app running in Microsoft’s South Central USA (San Antonio, Texas) data center is still maintaining 99.90% uptime or better with a single Web Role instance.


Rob Blackwell uploaded cl-azure, which lets you access Windows Azure cloud storage from Common Lisp, to GitHub on 1/3/2011:

Introduction

… This project is an incomplete, proof-of-concept implementation of a Common Lisp library for accessing  [Windows Azure’s] storage features, namely Blobs, Tables and Queues.

Demo / Instructions

[Sample Lisp code elided for brevity.]

Why?

This code was the result of some Christmas holiday hacking inspired by the Land of Lisp book, the recent availability of Quicklisp and Zach Beane's open source code for Amazon Web Services.

I hope it demonstrates that Windows Azure is an open, cross platform cloud storage system that isn't tied to Windows or .NET.

If you're not a Lisper, I'd encourage you to look at the above trace a line at a time, and see just how interactive and incremental the programmer experience is. Tables, Blobs, etc., come back as Lists so it would be easy to slice and dice them. When you want to get under the covers and see what's going on at the HTTP and XML level, that's easy too.

It might turn out to be a useful debugging and exploration tool for my Windows Azure consultancy work.

Next Steps

I hope to get some time to flesh this out more during 2011 and provide some better documentation. Any comments, feedback, constructive criticism or code contributions welcome!


<Return to section navigation list> 

SQL Azure Database and Reporting

Dan Wissa described Migrating a SQL database to SQL Azure in this 1/3/2011 post:

Happy New Year 2011 to you all.

Continuing on with my previous posts on Windows Azure, I thought I’d write up a post covering migrating a SQL Server database from a local SQL Server and into the cloud.

The conversion of a SQL Server database into SQL Azure for publishing in the cloud can be done with the SQL Azure Migration Wizard, which you can find on CodePlex. For the purposes of this post I have decided to try to migrate the AdventureWorks sample database into SQL Azure format.

Once you’ve installed the AdventureWorks databases from the link above and log on to your local instance of SQL Server, you should see the following databases listed.

sq2

Now your databases are ready, and you can start the conversion by running the SQL Azure Migration Wizard executable, which presents you with the following screen.

sq1

As you can see above, the SQL Azure Migration Wizard allows you to migrate a T-SQL script file into SQL Azure format as well as migrate a SQL Server database. I have chosen the SQL DB option as shown below.

sq3

You are then prompted to enter the logon details for logging into the SQL Server instance that has the database to be migrated as shown below.

sq4

sq5

After choosing which database you wish to migrate you can then choose which database objects you want to migrate via the standard or advanced options as shown below.

sq6

For this post I’ve chosen the Script all database objects option. The following screen then shows a summary of the selection options chosen from the previous screens.

sq7

Then you are asked to confirm the generation of the script before proceeding.

sq8

Once the script is generated, you will get the following screen showing a log of all actions of the script.

sq10

You can then go ahead and execute this script on your SQL Azure instance as shown below.

Enter your SQL Azure database connection details.

sq11

Select the Database to execute the queries on or create a new database.

sq12

Run the script

Once the script has been completed you will see the following screen confirming the success/failure of the different operations that have been undertaken by the script.

sq15

Finally, you can then connect directly to your SQL Azure instance from SSMS to check the items generated by the migration as shown below.

sq16
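
If you prefer to verify the migrated schema from code rather than SSMS, a quick ADO.NET check also works. The following is only an illustrative sketch; the server, database, login and password in the connection string are placeholders for your own SQL Azure values.

using System;
using System.Data.SqlClient;

class VerifyMigration
{
    static void Main()
    {
        // Placeholder SQL Azure connection values -- substitute your own server,
        // database, login, and password. Note the user@server login format.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;" +
            "Database=AdventureWorks;" +
            "User ID=yourlogin@yourserver;" +
            "Password=yourpassword;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Count the user tables created by the migration script.
            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM sys.tables", connection))
            {
                int tableCount = (int)command.ExecuteScalar();
                Console.WriteLine("Migrated user tables: {0}", tableCount);
            }
        }
    }
}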

Most tutorials use the abbreviated AdventureWorksLT database rather than the full AdventureWorks version, which includes many more features that are incompatible with SQL Azure.


The MSCerts.net blog delivered another episode of Migrating Databases and Data to SQL Azure (part 5) - Creating an Integration Services Project on 12/28/2010:

2. SQL Server Integration Services

SQL Server Integration Services (SSIS) is a data-integration and workflow-solutions platform, providing ETL (Extract, Transformation, Load) solutions for data warehousing as well as extractions and transformations. With its graphical tools and wizards, developers often find that SSIS is a quick solution for moving data between a source and destination. As such, it's a great choice for migrating data between a local database and a SQL Azure database. Notice, however, that the previous sentence says data. When you're using SSIS, the database and tables must already exist in SQL Azure.

NOTE

Volumes of information (books, articles, online help, and so on) are available about SSIS. This section isn't intended to be an SSIS primer. If you're unfamiliar with SSIS, this section provides enough information to give you a foundation and get you started.

If you're familiar at any level with SSIS, you're probably wondering why it has the limitation of only moving data. Several SSIS tasks can provide the functionality of moving objects as well as data, such as the Transfer SQL Server Objects task. When asked about this task, Microsoft replied that SSIS relies on SMO (SQL Server Management Objects) for this task, and SMO doesn't currently support SQL Azure. In addition, some of the SSIS connection managers use SMO and therefore are limited when dealing with objects. Thus, the current solution is to create databases and tables using straight SQL and then use SSIS to do the actual data transfer. The following section illustrates how to use SSIS to migrate your data from on-premises SQL Server to SQL Azure.
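
In other words, the target schema must exist before SSIS can pump data into it. Below is a minimal sketch of that "straight SQL" step; the connection string and table definition are placeholder assumptions, and note that SQL Azure requires a clustered index (here the primary key) on every table.

using System.Data.SqlClient;

class CreateTargetSchema
{
    static void Main()
    {
        // Placeholder SQL Azure connection string -- replace with your own values.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=TargetDb;" +
            "User ID=yourlogin@yourserver;Password=yourpassword;Encrypt=True;";

        // SQL Azure requires a clustered index (here, the primary key) on every table.
        const string createTableSql = @"
            CREATE TABLE dbo.Customer
            (
                CustomerID   int          NOT NULL PRIMARY KEY CLUSTERED,
                CustomerName nvarchar(50) NOT NULL
            )";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(createTableSql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery(); // Run this before pointing SSIS at the table.
        }
    }
}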

2.1. Creating an Integration Services Project

To create your project, follow these steps:

  1. Fire up Business Intelligence Development Studio (BIDS) by choosing Programs → Microsoft SQL Server 2008 → Business Intelligence Development Studio.

  2. When BIDS opens and the New Project dialog displays, select Business Intelligence Projects from the list of project types, and then select Integration Services Project, as shown in Figure 6. Click OK.

Figure 6. Creating a new SSIS project

You now see an SSIS package designer surface. This surface has several tabs along the top: Control Flow, Data Flow, Event Handlers, and Package Explorer, shown in Figure 7. This example uses the Control Flow and Data Flow tabs.

Figure 7. SSIS Designer

In Visual Studio, select View → Toolbox. The Toolbox contains a plethora of what are called tasks, which are control and data-flow elements that define units of work that are contained and performed within a package. You use a few of these tasks to migrate the data from your local database to your SQL Azure database.


SSIS is the hard way to migrate SQL Server data to SQL Azure.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Brian Noyes posted WCF RIA Services Part 10 - Exposing Domain Services To Other Clients to the Silverlight Show blog on 1/3/2011:

This article is the tenth and final part of the series WCF RIA Services:

  1. Getting Started with WCF RIA Services
  2. Querying Data Through WCF RIA Services
  3. Updating Data Through WCF RIA Services
  4. WCF RIA Services and MVVM
  5. Metadata Classes and Shared Code in WCF RIA Services
  6. Validating Data with WCF RIA Services
  7. Authenticating and Authorizing Calls in WCF RIA Services
  8. Debugging and Testing WCF RIA Services Applications
  9. Structuring WCF RIA Services Applications
  10. Exposing Additional Domain Service Endpoints for Other Clients
Introduction

As mentioned in Part 1 of this article series, WCF RIA Services only supports code generating the client proxy code in Silverlight projects. However, that does not mean you cannot call your domain services from other clients. If I were not going to have a Silverlight client application as my primary client application, I would not bother defining my services as domain services. I would instead define normal WCF services or possibly WCF Data Services. To me, most of the benefit of WCF RIA Services is in the code generated client proxy code and client side framework. The validation support, the security model, the service error handling, and the deferred query execution are the things I think are most compelling about using RIA Services.

But if I do have a Silverlight client and use RIA Services, I probably don't want to have to implement a separate set of services for my non-Silverlight clients. The good news is, you don't have to. It is easy to expose additional endpoints from your domain services that can be consumed by other clients. In this article, I'll show you how to enable those endpoints, and will show what is involved in consuming your RIA domain services from non-Silverlight clients. Your options include exposing a read-only OData service endpoint, a full functionality SOAP endpoint compatible with the basicHttpBinding in WCF, or a full functionality REST JSON encoded endpoint.

You can download the source code for this article here.

Exposing an OData Endpoint From Your Domain Service

OData is short for the Open Data Protocol. It is a REST-based web service protocol for exposing data for querying via web services, and optionally allowing updates via that web service as well. You can read up on OData at http://www.odata.org/. OData uses the ATOM protocol for encoding the data in the HTTP body of the REST messages that flow from and to your service. OData allows you to express a complex query through parameters in the URL that is used to address the service. OData supports a subset of the common LINQ query operations such as filtering (the Where operation in LINQ), projection (the Select operation in LINQ), and paging (Take and Skip operations in LINQ). Additionally, the OData protocol allows you to send inserts, updates, and deletes for an exposed entity collection if the service allows it.

RIA Services allows you to expose a query-only OData endpoint (no updates) from your domain services. The exposed feed only allows you to retrieve the entire collection exposed by a query method. You cannot pass query filters or paging operations down to the service through the OData protocol, so the functionality is fairly limited at the current time. In a future release of WCF RIA Services they will probably support updates and more complex query operations, but for now you can basically just call your domain service query methods and return the collection that the server method returns without being able to filter from the client side.

This capability is part of the core WCF RIA Services libraries. All you need to do is remember to check the box when you first create your domain service as shown in the following figure.

Add Domain Service

If you forgot to do that when you created the domain service, don’t fret. You can always add a new domain service to the project and check the box for that domain service, and then delete that domain service. Checking the box adds a sectionGroup to your config file, adds a new domainServices section to the system.serviceModel part of your config file, and adds a reference to the System.ServiceModel.DomainServices.Hosting.OData.dll library to the project. Additionally, the [Query] attribute is added to each of the entity query methods added by the wizard. If you forget to check the box, you will need to add those attributes yourself as well.

[Query(IsDefault=true)]
public IQueryable<Task> GetTasks()
{ ... }

[Query(IsDefault = true)]
public IQueryable<TimeEntry> GetTimeEntries()
{ ... }

[Query(IsDefault = true)]
public IQueryable<Customer> GetCustomers()
{ ... }

The sectionGroup it adds makes a domainServices section available under system.serviceModel, and that section is where the OData endpoint gets registered.


Consuming the OData Feed From a .NET Client

What if you want to consume that OData feed? There are a variety of tools out there that can consume OData feeds, including the OData Explorer, a plug-in for Excel, and other tools. If you want to consume that data by querying it via services from another .NET client, it is very easy because Visual Studio can generate a client proxy for you in any .NET project. There is also a command-line tool called DataSvcUtil.exe that can do the same client code generation, making it easy to consume the feed.

To demonstrate this, I can add a WPF Application project to the solution. I then select Add Service Reference from that project and enter the address of the OData feed:

http://localhost:11557/TaskManager-Web-TasksDomainService.svc/odata

You should see that the service is found, and you will see a collection set for each of your entities that you have the [Query(IsDefault=true)] attribute on:

Add Service Ref OData

After you click OK, a client proxy will be generated. The generated OData proxy is quite different than a normal WCF service proxy. Instead of exposing methods that you call on the service, it exposes the entity sets. With normal OData services, you can form LINQ queries on those entity sets, and when you iterate over that expression (or call ToList or Count or other LINQ operators that do not defer execution), they will actually execute on the server side. The OData proxy sends the query expression tree to the server in a similar way to how WCF RIA Services does, by forming a query string from the expression tree of the LINQ expression as I explained in Part 3.

Unfortunately, the RIA Services implementation only allows you to ask for the entire entity set via OData. If you try to use other LINQ expressions such as Where or Take, you will get an error back that says “Query options are not allowed.” However, you can use the OData feed to retrieve all the entities exposed by a domain service query method.

For example, I could execute the following code in the WPF client:

Uri serviceUri = new Uri("http://localhost:58041/TaskManager-Web-TasksDomainService.svc/odata");
TasksDomainService proxy = new TasksDomainService(serviceUri);
List<Task> tasksBefore2Jun = proxy.TaskSet.ToList();

This would return the whole list of Tasks from the server, and then the client could present that data. However, if it allowed the user to edit the data, there would be no way to send the changes back to the server via the OData endpoint. Notice that the way the proxy works is by exposing entity sets as a property on the proxy itself. Also notice that the proxy requires a URL to the service on construction. There is no contract or binding associated with an OData proxy and you pass the address in through the constructor, so there is no need for client configuration of the proxy either.

Exposing a SOAP Endpoint From Your Domain Service

If you want to be able to execute your query and update methods from other clients, then you can use the SOAP or JSON endpoints that can also be enabled on your domain service. These require that you download and install the RIA Services Toolkit in addition to having the core RIA Services functionality that you get through the Silverlight 4 Tools for Visual Studio 2010.

The SOAP endpoint is a WCF basicHttpBinding compatible endpoint that can be easily consumed by just about any platform that speaks SOAP. To add the SOAP endpoint, you just add another endpoint in the domainServices section in your config, alongside the OData endpoint registered earlier. It looks like this:

<system.serviceModel>
  <domainServices>
    <endpoints>
      <add name="soap"
           type="Microsoft.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory, 
                 Microsoft.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, 
                 Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      ...
    </endpoints>
  </domainServices>
  ...
</system.serviceModel>

You will also need to add a reference in the web host to Microsoft.ServiceModel.DomainServices.Hosting, which is where the SoapXmlEndpointFactory type is defined as you can see from the config code above.

That endpoint does have metadata turned on, so clients can easily generate client proxy code from the endpoint like they would from any other WCF service. The address that this endpoint is exposed on is just the base domain service address with /soap appended to it.

Consuming the SOAP Endpoint From a .NET Client

To consume the SOAP Endpoint, you just do a normal Add Service Reference in the client project, or use svcutil.exe, or hand-code a proxy class using the ClientBase<T> base class. Using Add Service Reference is the easiest if you are new to WCF Services.

To add a service reference to the SOAP endpoint, just point Add Service Reference or svcutil.exe to the default address of your domain service, http://localhost:58041/TaskManager-Web-TasksDomainService.svc for the sample application. That will generate the compatible proxy and configuration code for the client.

Then you could write client code like the following to retrieve the Tasks collection and make an update to one of the tasks and send it back to the service to persist the change:

TasksDomainServicesoapClient proxy = new TasksDomainServicesoapClient();
// Retrieve the full collection, 
// no ability to filter server side unless additional methods exposed
QueryResultOfTask result = proxy.GetTasks();
Task[] tasks = result.RootResults; // Extract the real collection from the wrapper

// Make a modification
tasks[0].TaskName = "Modified by SOAP Client";

// Wrap it in ChangeSetEntry
ChangeSetEntry changeEntry = new ChangeSetEntry();
changeEntry.Entity = tasks[0];
changeEntry.Operation = DomainOperation.Update;

// Send the changes back to the server as an array of ChangeSetEntries
proxy.SubmitChanges(new ChangeSetEntry[] { changeEntry });
Task[] newFetchTasks = proxy.GetTasks().RootResults;

proxy.Close();

Most of the complexity in dealing with the SOAP endpoint is in wrapping up changes in the ChangeSetEntries. That type supports sending the original entity and the modified entity back as well, for optimistic concurrency checking or so that the server side can optimize the query by knowing which properties have actually changed on the object. Other than the wrapping of the entities, this is just normal WCF proxy-based service calls.
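
For example, opting in to that concurrency checking just means sending an unmodified copy of the entity along with the update. The fragment below builds on the earlier snippet and is only a sketch: it assumes the generated ChangeSetEntry exposes an OriginalEntity member and that Task has the properties shown, so check the names against your own generated proxy types.

// Keep an unmodified copy before editing. The property names copied here are
// assumptions -- copy whichever members your concurrency check relies on.
Task original = new Task
{
    TaskId = tasks[0].TaskId,       // hypothetical key property
    TaskName = tasks[0].TaskName
};

tasks[0].TaskName = "Modified by SOAP Client";

ChangeSetEntry changeEntry = new ChangeSetEntry();
changeEntry.Entity = tasks[0];
changeEntry.OriginalEntity = original;      // assumed member name -- verify in the generated proxy
changeEntry.Operation = DomainOperation.Update;

proxy.SubmitChanges(new ChangeSetEntry[] { changeEntry });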

In the sample code for this article, I turned off security to keep things focused on the basic mechanisms of exposing the endpoints and calling them. To secure the endpoints, you would again just leave the [RequiresAuthentication] attribute on the domain service and add an AuthenticationDomainService as discussed in Part 7. On the client side, you would need to make a call to the authentication domain service first to establish a login session. You would also need to enable a cookie container on the proxy for both the authentication domain service endpoint and the other domain services you want to call. Finally, you would need to copy the cookie container from the authentication service proxy to the other proxies after logging in. For a great walkthrough on this in the context of a Windows Phone 7 client, see this blog post by Marcel de Vries.

Exposing a REST/JSON Endpoint

Exposing a REST/JSON-style endpoint from your domain service, which functions just like the SOAP one just described, is simply a matter of adding another endpoint declaration to your configuration file.

<add name="JSON"
     type="Microsoft.ServiceModel.DomainServices.Hosting.JsonEndpointFactory, 
           Microsoft.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, 
           Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

You can then use your favorite approach such as a WebClient or HttpWebRequest in .NET to issue the HTTP request to the REST endpoint, and can use something like the WCF DataContractJsonSerializer to decode and encode the JSON payload in the HTTP body of the message. The address scheme is based on the addressing scheme of WCF service methods that you expose via REST. For example, to call the GetTasks method, you would just address http://localhost:58041/TaskManager-Web-TasksDomainService.svc/json/GetTasks.
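
As a rough sketch (not code from the article), the raw JSON can be pulled down with a WebClient. Inspect the payload in a browser or Fiddler first to see how the results are wrapped, then build matching [DataContract] types and deserialize them with DataContractJsonSerializer.

using System;
using System.Net;

class JsonEndpointClient
{
    static void Main()
    {
        // Address pattern from the article: /json/<MethodName> appended to the
        // domain service address. The port and service name are the sample app's.
        const string url =
            "http://localhost:58041/TaskManager-Web-TasksDomainService.svc/json/GetTasks";

        using (var client = new WebClient())
        {
            string json = client.DownloadString(url);

            // Dump the payload so you can model [DataContract] types that match
            // its shape before wiring up DataContractJsonSerializer.
            Console.WriteLine(json);
        }
    }
}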

Summary

As you can see, it is a fairly simple matter to expose the additional endpoints for OData, SOAP, and REST/JSON from your domain services. Because of the limitations on the OData endpoint in the current release, I find that one to be the least useful. However, the SOAP and REST endpoints do make it fairly easy to consume your domain services from other platforms. If I needed to provide CRUD services for a set of entities and needed to write client applications on multiple platforms with the full set of functionality, and wanted to make it as easy as possible for others to write clients for my services, I would not use WCF RIA Services for that. I would use either WCF Data Services to expose a fully functional OData endpoint, or I would write normal WCF Services where I was not constrained by the server side model of WCF Data Services. However, if I was writing a complex Silverlight application that was the primary client application, and just wanted to be able to expose some of the same entity CRUD functionality to other clients without needing to write separate services for them or give up the client side benefits of WCF RIA Services for my Silverlight client, then these additional endpoints are just what is needed.

So that brings me to the end of this series on WCF RIA Services. And I happen to be writing this on New Year's Eve (day) of 2010, so it happens to also be the end of a year and end of a decade as well. Keep an eye on my blog at http://briannoyes.net/ for additional posts about WCF RIA Services, and I will probably write some other articles on this and other topics here on The SilverlightShow as well. Thanks for reading, and please let me know any feedback you have through the comments.

You can download the source code for this article here.

Brian is Chief Architect of IDesign, a Microsoft Regional Director, and Connected System MVP.


RussH posted DataMarket Fixed Query Demo to the MSDN Code Gallery on 1/1/2011:

Resource Page Description

  • C# Web app that demonstrates Windows Azure Marketplace DataMarket fixed queries (a generic request sketch follows the Output list below)
  • Uses free Zillow APIs to discover pertinent information for a real estate purchase.

User input:

  • State abbreviation (ex. WA)
  • Mortgage (ex 350000)
  • Zip Code (ex 13353)
  • Down payment (ex 35000)

Output:

  • Current and last week's interest rates for:
  • 30 year fixed
  • 15 year fixed
  • 5/1 ARM
  • Monthly payments based upon input values and current interest rates (P&I, Mortgage Insurance, Down payment, Estimated property taxes, and Estimated hazard insurance)
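
For reference, a DataMarket fixed query is ultimately just an authenticated HTTPS GET. The sketch below is a generic illustration rather than code from the demo: the dataset URL and parameter names are placeholders, and it assumes DataMarket's Basic-authentication scheme, in which your account key serves as the password.

using System;
using System.Net;

class FixedQueryDemo
{
    static void Main()
    {
        // Placeholder fixed-query URL and parameters -- substitute the real
        // dataset address and parameter names from your DataMarket subscription.
        const string queryUrl =
            "https://api.datamarket.azure.com/SomePublisher/SomeDataset/v1/" +
            "SomeFixedQuery?State=%27WA%27&ZipCode=%2798052%27";

        const string accountKey = "<your DataMarket account key>";

        using (var client = new WebClient())
        {
            // Basic authentication; the account key is sent as the password.
            client.Credentials = new NetworkCredential("accountKey", accountKey);

            string atomFeed = client.DownloadString(queryUrl);
            Console.WriteLine(atomFeed); // OData Atom feed containing the results
        }
    }
}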

Different license terms apply to different file types:

  • Source code files are governed by the MICROSOFT PUBLIC LICENSE (Ms-PL)
  • Binary files are governed by MSDN CODE GALLERY BINARY LICENSE
  • Documentation files are governed by CREATIVE COMMONS ATTRIBUTION 3.0 LICENSE

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Online Business Blog posted a brief review of Programming WCF Services: Mastering WCF and the Azure AppFabric Service Bus by Juval Löwy on 1/3/2011:

ISBN-13: 9780596805487

Here’s Amazon.com’s Product Description and author bio:

Product Description

Programming WCF Services is the authoritative, bestselling guide to Microsoft's unified platform for developing modern service-oriented applications on Windows. Hailed as the definitive treatment of WCF, this book provides unique insight, rather than documentation, to help you learn the topics and skills you need for building WCF-based applications that are maintainable, extensible, and reusable.

Author Juval Löwy -- one of the world's top .NET experts -- revised this edition to include the newest productivity-enhancing features of .NET Framework 4 and the Azure AppFabric Service Bus, as well as the latest WCF ideas and techniques. By teaching you the why and the how of WCF programming, Programming WCF Services will help you master WCF and make you a better software engineer.

  • Learn about WCF architecture and essential building blocks, including key concepts such as reliability and transport sessions
  • Use built-in features such as service hosting, instance and concurrency management, transactions, disconnected queued calls, security, and discovery
  • Master the Windows Azure AppFabric Service Bus, the most revolutionary piece of the new cloud computing initiative
  • Increase your productivity and the quality of your WCF services by taking advantage of relevant design options, tips, and best practices in Löwy's ServiceModelEx framework
  • Discover the rationale behind particular design decisions, and delve into rarely understood aspects of WCF development

"If you choose to learn WCF, you've chosen well. If you choose to learn with the resource and guidance of Juval Löwy, you've done even better... there are few people alive today who know WCF as well."

--Ron Jacobs, Senior Technical Evangelist for WCF, Microsoft Corporation

About the Author

Juval Lowy is a software architect and the principal of IDesign, specializing in .NET architecture consulting and advanced training. Juval is Microsoft’s Regional Director for the Silicon Valley, working with Microsoft on helping the industry adopt .NET 4.0. He participates in the Microsoft internal design reviews for future versions of .NET and related technologies. Juval has published numerous articles, regarding almost every aspect of .NET development, and is a frequent presenter at development conferences. Microsoft recognized Juval as a Software Legend, one of the world's top .NET experts and industry leaders.

Agreed. I bought my copy a couple of months ago.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Eric Knorr asserted “Tony Scott talks frankly about Redmond's practice of making employees beta testers -- for Microsoft cloud services and conventional software” in a deck for his Microsoft CIO: We're dog-fooding the cloud article of 1/3/2011 for InfoWorld’s Cloud Computing blog:

I first spoke with Tony Scott in 2003 when he was CTO of General Motors. At the time, the obsession of the day was Web services, which Scott wryly called "an excuse to get people to talk together" about business processes, a role many grand IT initiatives fill.

Today Scott is CIO of Microsoft, a position he's held since Feb. 2008. When I interviewed him just before the holiday, the main excuse for us to talk was the tech industry's current obsession, cloud computing -- and how Microsoft is leveraging its own vast cloud computing infrastructure to serve its employees. We also touched on the consumerization of IT and how he is supporting a glut of new mobile devices.

Between his GM and Microsoft jobs, Scott served as CIO for the Walt Disney Company; further in the past, he was vice president of information services at Bristol-Myers Squibb. This golden resume puts Scott in a rarified group of IT leaders who are comfortable running huge global IT operations.

But being CIO of Microsoft is not the same as being CIO of any old $200 billion corporation. We began by talking about that difference.

Eric Knorr: What's it like to be CIO of a tech company like Microsoft? I bet it's fun have a bunch of technologists always second-guessing what you're doing.

Tony Scott: Well, but it's no different than at home. [laughs] Everybody is getting used to this world of having technology at their fingertips, and there's a belief that what scales at my house should scale in the corporate world. I find it fun and endlessly challenging in terms of how we're going to solve some of these big problems where people want access to everything all the time, in a very convenient way on whatever device they happen to be on or in or around.

Knorr: Just to be clear about your purview, you have nothing to do with the big new cloud infrastructure data centers? You're concerned with supporting 4,000 Microsoft users.

Scott: Maybe I should give a little context here. One of the roles that Microsoft IT plays, and has played for a long time, is to dog-food all of the products that are destined for the enterprise. In the past, that would have meant that when we did a new release of an operating system or a new Office release or whatever, we would start in the very early phases of the development cycle, deploying in very small quantities -- and then over the course of the development cycle, deploying internally at greater and greater scale.

In the example of Windows 7, by the time it reached its beta phase, we had virtually every employee in Microsoft all around the world on the beta release. Virtually every product that we sell into the enterprise follows that pattern. The world of services, whether it's the business productivity stuff or Azure or whatever, we treat no differently. I have many large projects underway now that are on our cloud infrastructure and will be delivered over the next couple of years.

Knorr: So how do you reconcile dog-fooding beta software with supporting your users? You have to test products for the business, but you also have to keep your users happy and make sure everything works. …

Read more: 2, 3, 4, 5, next page ›


<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.

 


<Return to section navigation list> 

Windows Azure Infrastructure

James Urquhart posted 'Go to' clouds of the future, part 1 on 1/3/2011 to C|Net News’ Wisdom of Clouds blog:

I am often asked which companies I think will be the most dominant names in IT 10 years from now, thanks to cloud computing. My answer often surprises those who ask; not because the two companies I believe will be the most recognized names in cloud-based IT services aren't considered players today, but rather because of why I believe they will be so recognized.

In this, the first of two posts exploring the companies that can best exploit the cloud model, I'll identify those two companies and explain why they best fit the needs of a large percentage of IT service customers. Then, in the second part of this series, I will explore several companies that will challenge those two leaders, possibly taking a leadership spot for themselves.

But before I get into who these leaders are, I have to explain why success in cloud computing will be different in 10 years than it is today.

Why today's clouds don't represent tomorrow's biggest opportunity

Think about what cloud computing promises. Imagine being a company that relies on technology to deliver its business capability, but does not sell computing technology or services itself. Picture being able to deliver a complete IT capability to support your business, whatever it is, without needing to own a data center--or at least any more data center than absolutely necessary.

Imagine there being a widely available ecosystem to support that model. Every general purpose IT system (such as printing, file sharing or e-mail) has a wide variety of competing services to choose from. Common business applications, such as accounting/finance, collaboration/communications and human resources, have several options to choose from. Even industry specific systems, such as electronic health records exchanges and law enforcement data warehouses, have one or more mature options to choose from.

Need to write code to differentiate your information systems? There will be several options to choose from there, as well. Most new applications will be developed on platform as a service options, I believe, with vendors meeting a wide variety of potential markets, from Web applications to "big data" and business intelligence to business process automation. However, if you want (or need) to innovate the entire software stack, infrastructure services will also be readily available.

With such a rich environment to choose from, what becomes your biggest problem? I would argue that's an easy question to answer: integration. Your biggest problem by far is integrating all of those IT systems into a cohesive whole.

In fact, we see that today. Most cloud projects, even incredibly successful ones like Netflix's move to Amazon Web Services, focus efforts within one cloud provider or cloud ecosystem, and usually include applications and services that were developed to work together from the ground up. While there have been attempts to move and integrate disparate IT systems across multiple clouds, none of them stand out as big successes today.

While some may argue that's a sign of the nascent nature of cloud, I would argue that it's also a sign that integrating systems across cloud services is just plain hard.

Why integrated services will drive the most revenue

Now imagine you are founding a small business like a consultancy or a new retail store. You need IT, you need it to "just work" with minimal effort and/or expertise, and you need it to be cost effective. What are you going to be looking for from "the cloud?"

There, again, I would argue the answer is easy: start-ups and small businesses will be seeking integrated services, either from one vendor, or a highly integrated vendor ecosystem. The ideal would be to sign up for one online account that provided pre-integrated financials, collaboration, communications, customer relationship management, human resources management, and so on.

In other words, "keep it simple, stupid." The cloud will someday deliver this for new businesses. But there are very few companies out there today that can achieve broad IT systems integration. I would argue the two most capable are Microsoft and Google.

"What?!?," you might be saying. "Both of those companies have been tagged as fading dinosaurs by the technorati in the last year. Why would anyone want to lock themselves into one vendor for IT services when the cloud offers such a broad marketplace--especially those two?"

To answer that, we need to look a little more closely at each vendor's current offerings, and stated vision.

Microsoft: it's all about the portfolio, baby!

Microsoft stands out for its breadth of offerings. While its infrastructure as a service and platform as a service offerings (both part of Azure) are central to its business model, it's the applications that will ultimately win them great market share.

Already, offerings such as Office 365 provide cloud-based versions of key collaboration and communications capabilities for a variety of business markets. However, Microsoft CEO Steve Ballmer has also made it clear that every Microsoft product group is looking at how to either deliver their products in the cloud, or leverage the cloud to increase the utility of their products.

As every product group within Microsoft pushes to "cloudify" their offerings, I am betting similar effort will be put in to making sure the entire portfolio is integrated.

Combine the Dynamics portfolio with SharePoint and Lync and add "Oslo"-based tools to integrate across system or organizational boundaries, and you've got a heck of an IT platform to get started with. Add in Azure, and you have the development platform services to allow you to customize, extend, or innovate beyond the base capabilities of Microsoft's services.

Google: Bringing consumer success to business

What impresses me most about Google's move towards the cloud has been its pure focus on the application. Google doesn't put forth offerings targeted at providing raw infrastructure. Even Google App Engine, one of the poster children of the platform as a service model, is built with making a certain class of applications--perhaps not surprisingly, Web applications--as easy to develop as possible. Most of the integration of the underlying platform elements has been done for the developer already.

However, it's when you look at its consumer application portfolio, and how it's modifying those applications for business, that you can see its real strength. Google takes chances on new Web applications all the time, and those who succeed--either by building a large user base, or by actually generating revenue--draw additional investment aimed at increasing the application's appeal to a broader marketplace. Google Mail is the most mature of these options, but Google Apps is not far behind.

What appears to be happening now, however, is a concerted effort by Google to build an ecosystem around its core application offerings. The Google Apps Marketplace is a great example of the company trying to build a suite of applications that integrate with or extend its base Google Apps and mail offerings.

Add the company's nascent suite of communications and collaboration tools, such as Google Voice and Buzz, and signs of integration among all of their offerings, and you can see the basis of a new form of IT platform that will especially appeal to small businesses and ad hoc work efforts.

There are no guarantees in cloud

As you can see, Microsoft and Google have the basic tools and expertise to deliver on the one-stop shop IT services model, and both have proven to me that they have the desire as well. However, neither company is a shoo-in for success in this space. There are two reasons for this, the most important of which is that neither company has what I would call a spotless execution record. In fact, both have struggled mightily to impose change on their core business models.

Both companies will have to align their various efforts to see this vision through, even as it disrupts current markets. Each has plenty of applications that show great promise, but both are also a long way away from proving they can deliver on a one-stop shop vision.

The other reason is that there are a variety of worthy competitors vying for the "one-stop" throne. You may have been asking by now about Amazon, Salesforce.com, VMware, or the hosting companies, and telecoms. In the second post of the series, I'll outline my favorites to displace the two leaders, including one that may surprise you.

In the meantime, I think cloud services targeting developers will still get most of the press for the next several years. Achieving an integrated IT platform that serves multiple business markets is extremely difficult, and will take a true commitment and concerted effort by the company or companies that ultimately achieve that vision.

James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus.



Manu Cohen-Yashar continued his series with Azure Elasticity– Part 2 on 1/2/2011:

In the last post [see below] I wrote about elasticity policy that defines when to change the number of instances.

In this post I will write about how to actually do that.

Let us say you have a service with 2 instances and you want to increment the number of instances by 1.

The simplest way to do that is to use the portal.


Well, it is nice and easy, but it is not automated. Maybe this is a good thing. Remember: more instances means more money to pay! The fact that the process of creating an instance is not automated means that we stay in full control. On the other hand, humans are slow and expensive, and if there is no human around (weekends…) we might find ourselves without enough computing resources for quite some time.

The second option is to use PowerShell.


The script to increment the number of instances would look something like this:

# Sample values: substitute your own certificate name, subscription ID,
# service name, and package/configuration paths.
$cert = Get-Item cert:\CurrentUser\My\CertName
$sub = "CCCEA07B-1E9A-5133-8476-3818E2165063"
$servicename = 'myazureservice'
$package = "c:\publish\MyAzureService.cspkg"
$config = "c:\publish\ServiceConfiguration.cscfg"

# Load the Windows Azure Service Management cmdlets
Add-PSSnapin AzureManagementToolsSnapIn

# Make sure the production deployment is running
Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

# Increment the instance count of the WebUx role by one
Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentConfiguration {$_.RolesConfiguration["WebUx"].InstanceCount += 1}

To automate this you can spin up a process that runs powershell yourscript.cmd. As I wrote in the last post, you can create rules and monitor performance characteristics to decide when to run the script.

Another option is to use the management API.

The management API is a REST API that can be used to execute any management task that can be executed in the portal.

csManage is a command-line tool that abstracts the API. It is provided with the Windows Azure samples, so you have the code, and it is easy to convert it into a simple API that you can use in your project. You can read an interesting post that franksie wrote about csManage.
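
For orientation, here is a rough sketch of the change-deployment-configuration call that csManage wraps, made directly against the Service Management REST API. Every identifier below (subscription ID, service name, role name, certificate thumbprint, file path) is a placeholder, and the URI format and x-ms-version header should be checked against the Service Management API documentation before you rely on them.

using System;
using System.Linq;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Xml.Linq;

class ScaleOut
{
    // Placeholders -- replace with your own subscription, service, role,
    // management certificate thumbprint, and configuration file path.
    const string SubscriptionId = "<subscription-id>";
    const string ServiceName = "myazureservice";
    const string RoleName = "WebUx";
    const string Thumbprint = "<management-certificate-thumbprint>";
    const string ConfigPath = @"c:\publish\ServiceConfiguration.cscfg";

    static void Main()
    {
        // 1. Bump the instance count in a local copy of the service configuration.
        XDocument config = XDocument.Load(ConfigPath);
        XNamespace ns = config.Root.Name.Namespace;
        XElement instances = config.Root.Elements(ns + "Role")
            .First(r => (string)r.Attribute("name") == RoleName)
            .Element(ns + "Instances");
        int count = (int)instances.Attribute("count");
        instances.SetAttributeValue("count", count + 1);

        // 2. Post the updated configuration to the Service Management API
        //    (URI and version header per the Service Management API documentation).
        string uri = string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/production/?comp=config",
            SubscriptionId, ServiceName);

        string body =
            "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Configuration>" +
            Convert.ToBase64String(Encoding.UTF8.GetBytes(config.ToString())) +
            "</Configuration></ChangeConfiguration>";

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2010-10-28");
        request.ClientCertificates.Add(FindManagementCertificate());

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        using (var stream = request.GetRequestStream())
        {
            stream.Write(bytes, 0, bytes.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // The request ID can be fed to Get Operation Status to poll for completion.
            Console.WriteLine("Request accepted: {0}", response.Headers["x-ms-request-id"]);
        }
    }

    static X509Certificate2 FindManagementCertificate()
    {
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            return store.Certificates
                .Find(X509FindType.FindByThumbprint, Thumbprint, false)[0];
        }
        finally
        {
            store.Close();
        }
    }
}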

The last option I would like to talk about is "Elasticity using tools".

Here we are at the beginning of the road… I imagine that many tools are being developed right now (including System Center support).

Igor Papirov pointed me to a nice (but a little expensive) tool called AzureWatch that automates exactly what I described in these two posts. It will monitor the performance of your app and then spin up new instances or kill existing instances according to your rules.

Well, now it is for you to decide how to implement elasticity.


Manu Cohen-Yashar started a new series with Azure Elasticity–Part 1 on 12/28/2010:

In all Azure presentations the term "Pay as you use" repeats itself. The question is: how do you achieve that? The answer is elasticity.

Elasticity is the capability to create more instances when demand goes up and to delete redundant instances when demand goes down.

Now we have to ask ourselves how we implement that. Well, there are a few alternatives, which I want to describe in this new series of posts.

To implement Elasticity you have to:

  1. Identify the need for more instances or the fact that there are too many.
  2. Perform the action of increasing or decreasing the number of instances.

How to identify the need for more Instances? There are several strategies:

  1. It is possible to check performance characteristics, for example latency. When a threshold is exceeded, the number of instances is increased.
    The problem here is how to decide when to decrease the number of instances. If performance is OK, this does not always mean that there are redundant instances.
  2. Create an architecture that provides information about the need for new instances or about redundant instances. Such architectures are usually based on queues: all requests are sent to computing instances via queues.
    The queue length can be measured. Above a certain threshold the number of instances is incremented; below another threshold it is decreased. Of course this is a simple policy; it is possible to implement more complicated or more modern policies based on the queue length. (A minimal sketch of such a queue-length check appears after this list.)
  3. Rule based: sometimes you do not need to identify anything – you just know. If the load is predictable (for example, on the 1st day of each month salaries must be processed), you can plan the number of instances you need.
    The problem: How can you be sure that your predictions are correct?
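
Here is the queue-length check referenced in option 2, written against the v1.x StorageClient library. The thresholds, queue name and storage credentials are illustrative assumptions, and the decision would feed whatever scaling action you implement (see the next post).

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueLengthMonitor
{
    // Illustrative thresholds -- tune them against your own workload.
    const int ScaleUpThreshold = 500;
    const int ScaleDownThreshold = 50;

    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");

        CloudQueue queue = account.CreateCloudQueueClient()
            .GetQueueReference("workitems"); // placeholder queue name

        // A server-side estimate of the backlog, good enough for scaling rules.
        int backlog = queue.RetrieveApproximateMessageCount();

        if (backlog > ScaleUpThreshold)
            Console.WriteLine("Backlog {0}: add an instance.", backlog);
        else if (backlog < ScaleDownThreshold)
            Console.WriteLine("Backlog {0}: consider removing an instance.", backlog);
        else
            Console.WriteLine("Backlog {0}: no change.", backlog);
    }
}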

I recommend not using only one of the above strategies but a combination.

For example: Plan for a certain number of instances but continue to check the queues length and identify the need for unplanned instances required.

One thing you have to remember: The number of instances is the number one factor in the bill you will receive at the end of the month. Someone has to be responsible for the number of instances and make sure they do not exceed the budget provided for the project. This is usually the IT Pro. I recommend enforcing a policy that will throttle the number of instances. Remember that there is always a possibility that your elasticity engine has a bug and too many instances are created.

In the next post I will describe how to implement the action of increasing and decreasing the number of instances.


John Treadway (@cloudbzz) posted A Vision of the Future Cloud Data Center to his CloudBzz blog on 1/3/2011:

image A new year is often a time for reflection on the past and pondering the future.  2010 was certainly a momentous year for cloud computing.  An explosion of tools for creating clouds, a global investment rush by service providers, a Federal “cloud first” policy, and more.  But in the words of that famous Bachman Turner Overdrive song — “You ain’t seen nothin’ yet!”

In fact, I’d suggest that in terms of technological evolution, we’re really just in the Bronze Age of cloud.  I have no doubt that at some point in the not too distant future, today’s cloud services will look as quaint as an historical village with no electricity or running water.  The Wired article on AI this month is part of the inspiration for what comes next.  After all, if a computer can drive a car with no human intervention, why can’t it run a data center?

Consider this vision of a future cloud data center.

The third of four planned 5 million square foot data centers quietly hums to life.  In the control center, banks of monitors show data on everything from number of running cores, to network traffic to hotspots of power consumption.  Over 100,000 ambient temperature and humidity sensors keep track of the environmental conditions, while three cooling towers vent excess heat generated by the massively dense computing and storage farm.

The hardware, made to exacting specifications and supplied by multiple vendors, uses liquid coolant instead of fans – making this one of the quietest and most energy-efficient data centers on the planet.  The 500U racks reach 75 feet up into the cavernous space, though the ceiling is yet another 50 feet higher where the massive turbines draw cold air up through the floors.  Temperature is relatively steady as you go up the racks due to innovative ductwork that vents cold air every 5 feet as you climb.

Advanced robots wirelessly monitor the 10GBps data stream put off by all of the sensors, using their accumulated “knowledge and experience” to swap out servers and storage arrays before they fail. Specially designed connector systems enable individual pieces or even blocks of hardware to be snapped in and out like so many Lego blocks – no cabling required.  All data moves on a fiber backbone at multiple terabytes per second.

On the data center floor, there are no humans.  The PDUs, cooling systems and even the robots themselves are maintained by robots – or shipped out of the data center into an advanced repair facility when needed.  In fact, the control center is empty too – the computers are running the data center.  The only people here are in the shipping bay, in-boarding the new equipment and shipping out the old and broken, and then only when needed.  Most of these work for the shippers themselves.  The data center has no full-time employees.  Even security and access control for the very few people allowed on the floor for emergencies is managed by computers attached to iris and handprint scanners.

The positioning and placement of storage and compute resources makes no sense to the human eye.  In fact, it is sometimes rearranged by the robots based on changing demands placed on the data center – or changes that are predicted based on past computing needs.  Often this is based on private computing needs of the large corporate and government clients who want (and will pay for) increased isolation and security.  The bottom line – this is optimized far beyond what a logical human would achieve.

Tens of millions of cores, hundreds of exabytes of data, no admins.  Sweet.

The software automation is no less impressive.  Computing workloads and data are constantly optimized by the AI-based predictive modeling and management systems.  Data and computing tasks are both considered to be portable – one moving to the other when needed.  Where large data is required, the compute tasks are moved to be closer to the data.  When only a small amount of data is needed, it will often make the trip to the compute server.  Of course, latency requirements also play a part.  A lot of the data in the cloud is maintained in memory — automatically based on demand patterns.

The security AI is in a constant and all-out running battle with the bots, worms and viruses targeting the data center.  All server images are built with agents and monitoring tools to track anomalies and attack patterns that are constantly updated.  Customers can subscribe to various security services and the image management system automatically checks for compliance. Most servers are randomly re-imaged throughout the day based on the assumption that the malware will eventually find a way to get in.

Everything is virtualized – servers, storage, networking, data, databases, application platforms, middleware and more.  And it’s all as a service, with unlimited scale-out (and scale-in) of all components.  Developers write code, but don’t install or manage most application infrastructure and middleware components.  It’s all there and it all just works.

Component-level failure is assumed and has no impact on running applications.  Over time, as the AI learns, reliability of the software infrastructure underlying any application exceeds 99.999999%.

Everything is controllable through APIs, of course.  And those APIs are all standards-based so tools and applications are portable among clouds and between internal data centers and external clouds.

All application code and data is geographically dispersed so even the failure of this mega data center has a minimal impact on applications.  Perhaps a short hiccup is experienced, but it lasts only seconds before the applications and data pick up and keep on running.

Speaking of applications, this cloud data center hosts thousands of SaaS solutions for everything from ERP, CRM, e-commerce, analytics, business productivity and more. Horizontal and vertical applications too.  All exposed through Web services APIs so new applications – mashups – can be created that combine them and the data in interesting new use cases.  The barriers between IaaS, PaaS and SaaS are blurred and operationally barely exist at all.

All of this is delivered at a fraction of the cost of today’s IT model.

Large data center providers using today’s automation methods and processes are uncompetitive. Many are on the verge of going out of business and others are merging in order to survive.  A few are going into higher-level offerings – creating custom solutions and services.

The average enterprise data center budget is 1/10th of what it used to be. Only the applications that are too expensive to move or otherwise lack suitability for cloud deployment are still in-house managed by an ever-dwindling pool of IT operations specialists (everybody else has been retrained in cloud governance and management, or found other careers to pursue).  Everything else is either a SaaS app or otherwise cloud-hosted.

Special-purpose clouds within clouds are easily created on the fly, and just as easily destroyed when no longer needed.

The future of the cloud data center is AI-managed, highly optimized, and incredibly powerful at a scale never before imagined.  The demand for computing power and storage continues to grow at ever increasing rates.  Pretty soon, the data center described above will be considered commonplace, with scores or even hundreds of them sprinkled around the globe.

This is the future – will you be ready?

John’s post sounds to me much like the early hype for IBM’s autonomic computing and Microsoft’s Dynamic Systems Initiative (DSI). Michael Coté has more to say about DSI and the cloud in his Getting Cloud Crazy Microsoft TechEd 2010 post of 6/11/2010 about Windows Azure. For more from me, plus Microsoft’s Paul Flessner and Pat Helland, about “Service Centers” constructed of “Smart Bricks,” see my Very Large Databases: Bricks, BitVault and BigTable OakLeaf blog post of 4/6/2006:

image

Unfortunately, links to my articles for Fawcette Technical Publishing’s Visual Studio and Windows Server System magazines about Autonomous Computing Cells and the like no longer work. Here’s an excerpt and infographic from my "Build Data Service Centers With Bricks" article for Fawcette Technical Publication's Windows Server System Magazine (May 2003 Tech*Ed issue):

Microsoft's example of a service center is a high-traffic e-commerce site in which relatively static reference data (product and customer information) and shopping-cart state is delivered by partitioned databases in individual autonomous computing cells (ACCs). The Order Processing System is a conventional, shared nothing database cluster, not an ACC. Brown text identifies the elements that David Campbell's original PowerPoint slide didn't include.

My preceding illustration looks a lot like a sharded SQL Azure database.


IDCTechTalk uploaded IDC's Frank Gens and 2011 Predictions for the Cloud as a 00:05:29 YouTube video on 12/14/2010 (missed when posted):

Frank Gens, IDC's Chief Analyst, discusses some of his Predictions for key Cloud related trends and events over the coming year, 2011. Frank gives his insight on public Cloud services adoption growth vs. private Cloud adoption growth as well as his thoughts on the critical area of Cloud Management. Finally, he discusses what could be the "death" of Cloud Computing... as a buzz word.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Yasser Abdel Kader continued his private cloud series with a Private Cloud Architecture - Part 4: Patterns post to the Team Blog of MCS @ Africa and Middle East:

image In the first part of this series (here and here), I started by some discussion regarding the basic definition that we will build upon toward achieving the Private Cloud Promises. In the second part (here and here) I discussed the Core Principles for Private Cloud. In the third part (here and here) I discussed the Core Concepts of Private Cloud.


In this final post, I will discuss the main patterns for Private Cloud that provides solution to commonly real life problems to enable the concepts and principles discussed before.

Resource Pooling: by combining storage into a storage resource pool, and compute and network into a compute resource pool, you will be able to divide the resources into partitions for management purposes. You can further think of this from the point of view of:

1. Service management (separating resources by security, performance or customer).

2. Systems management (the state of VMs: deployed, provisioned or failed).

3. Capacity management (the total capacity of your private cloud).

Fault Domain (Physical): knowing how a physical fault will impact a resource pool affects the resiliency of the VMs. A private cloud is resilient to small faults such as the failure of a VM or of Direct Attached Storage (DAS). But imagine a private cloud of 20 racks, each containing 10 servers and backed by a single UPS. If one UPS fails, the unit of physical fault is 10 servers.

Upgrade Domain: although VMs create an abstraction layer, you will still need to update or patch the underlying physical server layer. An upgrade domain defines a grouping whose workload can be migrated away so that the underlying physical servers can be upgraded, after which the workload is migrated back without disrupting existing services.

Reserve Capacity: a homogeneous, resource-pool-based approach provides the advantage of moving a VM from a faulted server to a new one with the same capacity without a performance hit. This means you will need to reserve some capacity to cater for resource decay and for the fault domain and upgrade domain patterns. There is no single right answer for how many servers to hold in reserve. Available capacity equals total capacity minus reserve capacity, as the sketch below illustrates.
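As a rough illustration of that arithmetic (my sketch, not Yasser’s), here is a minimal Python example that applies the fault domain, upgrade domain and reserve capacity patterns; all of the rack counts and allowances are hypothetical:

# Hypothetical capacity figures for illustration only.
RACKS = 20                # racks in the private cloud
SERVERS_PER_RACK = 10     # one UPS per rack, so the fault domain is 10 servers
UPGRADE_DOMAIN = 10       # servers drained together while patching hosts
DECAY_ALLOWANCE = 5       # servers assumed lost to hardware decay

total_capacity = RACKS * SERVERS_PER_RACK

# Reserve enough servers to survive the loss of one fault domain,
# one upgrade domain being drained, plus expected decay.
reserve_capacity = SERVERS_PER_RACK + UPGRADE_DOMAIN + DECAY_ALLOWANCE

# Available capacity = total capacity - reserve capacity
available_capacity = total_capacity - reserve_capacity

print(f"Total: {total_capacity}, Reserve: {reserve_capacity}, "
      f"Available: {available_capacity}")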

Scale Unit: you will need to think of the scale unit pattern from two perspectives: the compute scale unit, which combines servers and network, and the storage scale unit. The scale unit is the standard increment that is added to the current capacity of the private cloud.

Capacity Plan: capacity planning is done by applying the patterns above (reserve capacity, scale unit, etc.). You will need to cater for the usual factors (peak capacity, normal growth and accelerated growth) and to define the triggers for adding capacity, based on factors such as the scale unit size, the lead time to procure hardware and the installation time; a simple trigger check is sketched below.
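Again as my own back-of-the-envelope sketch (the numbers are invented), a capacity trigger can be as simple as asking whether projected demand at the end of the hardware lead time would exceed what is available today:

# Hypothetical capacity-trigger check: order a new scale unit when the
# projected demand at the end of the procurement lead time would exceed
# the servers currently available.
def should_order_scale_unit(available_servers, servers_in_use,
                            weekly_growth, lead_time_weeks):
    projected_use = servers_in_use + weekly_growth * lead_time_weeks
    return projected_use > available_servers

# 175 servers available, 150 in use, growth of 4 servers/week,
# 8 weeks to procure and install the next scale unit.
if should_order_scale_unit(175, 150, 4, 8):
    print("Order the next scale unit now.")
else:
    print("Current capacity covers the lead time.")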

Health Model: you will need to build a matrix that, when a hardware component fails, automatically determines which VMs fail as a result. This also covers environmental factors such as power supply and cooling. When a fault occurs, moving workloads around the fabric is the responsibility of the health model.
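A toy version of that matrix (my illustration, not Yasser’s, with hypothetical component and VM names) is just a mapping from hardware components to the VMs they host:

# Map hardware components to the VMs they affect, so a component failure
# can be translated into a list of impacted VMs.
health_matrix = {
    "rack-07/ups":    ["vm-101", "vm-102", "vm-103"],
    "rack-07/switch": ["vm-101", "vm-102", "vm-103", "vm-104"],
    "server-42/das":  ["vm-104"],
}

def affected_vms(failed_component):
    """Return the VMs impacted by a failed hardware component."""
    return health_matrix.get(failed_component, [])

print(affected_vms("rack-07/ups"))   # ['vm-101', 'vm-102', 'vm-103']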

Service Class: this describes how applications interact with the private cloud infrastructure. Applications can be designed to be stateless (the least costly class), where the application provides its own redundancy and resiliency and does not use those services from the private cloud fabric, or stateful, where applications benefit from fabric redundancy and resiliency through live migration (moving the workload from one VM to another).

Cost Model: compute in a private cloud is provided as a utility under a consumption-based charge model (like electricity); in that case the usage cost should account for deployment, operations and maintenance, as the rough example below shows.
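To make that consumption model concrete, here is a back-of-the-envelope sketch with invented figures (mine, not Yasser’s) that folds deployment, operations and maintenance into a per-VM-hour chargeback rate:

# Hypothetical annual costs for a small private cloud, used to derive a
# consumption-based (per VM-hour) chargeback rate.
deployment_cost  = 400_000   # hardware and build-out, amortized over one year
operations_cost  = 250_000   # staff, power, cooling
maintenance_cost = 100_000   # support contracts, spares

vm_hours_per_year = 200 * 24 * 365   # e.g. 200 VMs running around the clock

rate_per_vm_hour = (deployment_cost + operations_cost +
                    maintenance_cost) / vm_hours_per_year

print(f"Chargeback rate: ${rate_per_vm_hour:.3f} per VM-hour")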

In conclusion, cloud computing promises a great deal for managing and utilizing infrastructure (compute, storage and network). It is not yet mature, but there is considerable effort under way in the IT field to shape the future around that model.

Yasser is a Regional Development Architect in the Microsoft Middle East and Africa HQ Team based in Cairo.

Presumably third-party WAPA vendors will implement the preceding patterns for large-scale private clouds with “hundreds or thousands of processors.”


David Kearns offered identity-related recommendations for Avoiding "cloud anguish" in a 1/3/2011 article for NetworkWorld’s Infrastructure Management blog:

A very Happy New Year to you all. From what we saw in the “predictions” newsletters just before Christmas, those in the cloud computing arena should have the happiest time in 2011. But I do have a word or two of caution.

It was just a year ago, when I offered up predictions from various thought leaders in the IdM and IAM industry, that many were predicting 2010 would be the “Year of the Cloud.” I should have said it then, and I will say it now: the “year of” any technology isn’t recognized until long after it has passed. 2010 may well be labeled as the Year of the Cloud, but that won’t be for some time to come and I’m beginning to doubt that it will. Likewise 2011 won’t be the Year of the Cloud.

We’re already seeing a lot of “cloud anguish” from noted opinion makers. Many are suggesting that organizations that are dragging their feet about getting into cloud computing should consider so-called “private clouds” as a first step. Generally, security is one of the leading reasons for this suggestion. But this turns on its head all the reasons suggested for moving your enterprise apps to the cloud – the lack of manpower and expertise in house, the ease of maintenance, the 24/7 uptime.


So there’ll be lots and lots of talk about the cloud, both public and private – even outsourced private clouds – for the rest of this year. And the discussion will become more general, encompassing more people, if Microsoft continues its really confusing “To the cloud!” series of TV ads. None of this will make our job of explaining identity and the cloud any easier. Year of the cloud? I don’t think so. Year of cloud confusion sounds more like it.

If you’re putting together your travel calendar for the New Year, four events you should include are:

  • April 17-20 The Experts Conference, Las Vegas. It’s the 10th anniversary TEC, bigger and better than ever. A “must do” for Active Directory and other Microsoft identity professionals.
  • May 3-5 Internet Identity Conference, Mountain View, California. The place for those interested in user-centric identity, social networking identity and digital privacy issues.
  • May 10-13 The European Identity Conference in Munich. This is fast becoming not only Europe’s premier IdM conference, but one of the world’s best. It’s also a “two-fer” as it coincides with CLOUD 2011.
  • July 18-21 The Cloud Identity Summit, in Keystone, Colorado. We can all get together and decide if this “cloud thing” will ever go anywhere.


Manu Cohen-Yashar reported Hyper-V Cannot run on Intel Q8xxx on 12/29/2010:

To work with the Azure VM Role, I wanted to install Windows Server 2008 with Hyper-V on my desktop machine at home.

I have a great machine - Intel Core 2 Quad Q8200 CPU, a lot of disks, a lot of RAM and a lot of quiet cooling fans…a neat home server.


The installation of Windows Server 2008 went smoothly and the performance was just stunning…until it was time to set up Hyper-V. To my surprise, an error message popped up telling me that the hardware requirements weren’t fulfilled, which I found strange. A quick check of the Hyper-V system requirements does show some specific CPU requirements:

  • x64 CPU
  • Hardware-assisted virtualization (Intel VT for Intel processors and AMD-V for AMD processors)
  • Hardware Data Execution Protection (DEP) must be available and enabled (Intel XD bit and AMD NX bit)

A check on Intel’s Core 2 Quad processor overview page shows that all of the quad-core processors support VT technology and DEP except the Q8xxx series (Q8200, Q8200S and Q8300). How ironic.

There is no happy end. I have to get a new processor.

Hyper-V runs great on my test machine that has an Intel DQ45CB motherboard with an Intel Core 2 Quad 2.83 GHz Q9550 Yorkfield processor and 8 GB RAM.
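If you want to check a machine before installing the Hyper-V role, a quick WMI query can help. Here is a rough Python sketch that shells out to wmic; the VirtualizationFirmwareEnabled property is an assumption that only newer Windows builds expose, so an empty result means you should fall back to Intel’s processor pages or a tool such as Coreinfo rather than treating the box as unsupported:

import subprocess

# Ask WMI for the CPU name and, where the OS exposes it, whether hardware
# virtualization is enabled in firmware. VirtualizationFirmwareEnabled is
# not present on older Windows versions, so missing output means
# "check by hand" rather than "unsupported".
result = subprocess.run(
    ["wmic", "cpu", "get", "Name,VirtualizationFirmwareEnabled", "/format:list"],
    capture_output=True, text=True, check=False)

print(result.stdout.strip() or "Property not available; check Intel's specs or Coreinfo.")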


Darren Cunningham posted 2011 Cloud Integration Predictions: Hybrid Platform Adoption, Private Cloud Confusion to Informatica’s Perspectives blog on 12/29/2010:

Darren Cunningham

In my last post I scored my 2010 cloud integration predictions. I’d say it was a solid B. While the spate of acquisitions in the market helped shine a light on the opportunity for data integration delivered as an on-demand service, I’d characterize 2010 as a year of line of business-driven cloud integration adoption. My prediction is that 2011 will be the year of “hybrid everything” and IT organizations will be reorganizing and realigning to adapt to the new realities of cloud computing. Here are my 2011 cloud integration predictions.

Cloud Adoption Drives Two-Tier Cloud Integration Strategies

In 2011, enterprise IT organizations will adopt a two-tier cloud integration strategy that is aligned with their corporate data integration standard.  Many less complex, often point-to-point, data integration use cases (migration, synchronization, replication, cleansing, etc.) will be managed directly by cloud application administrators, analysts and operations users who need easy-to-use, agile tools and templates.  These cloud-based integration services must provide interoperability with the corporate standard in order to eliminate “rogue” approaches (including hand coding) that are unable to scale and grow with the business. The right two-tier cloud integration strategy is not something that tier-two technologies will be able to support and manage.

LOB-Driven Cloud Integration Projects Lead to Strategic MDM Initiatives

The majority of cloud data integration implementations in the enterprise have been what my friends at Appirio refer to as “cloud to ground,” or cloud application to on-premise application, database or file integration. A primary use case is master data synchronization (customer, product, price book, etc.) between systems (for example, CRM and ERP). In 2011, more and more enterprise sales and marketing operations managers will be looking to go beyond CRM “endpoint” integration, working with IT to identify equally flexible approaches to Master Data Management (MDM) that will leverage their current cloud integration investments.

The Rise of the Cloud Integration Platform

I agree with what  Alex Williams of ReadWriteWeb refers to as a “proliferation of APIs” in 2011. The result will be an increased need for cloud-based platforms that can handle process-centric, real-time and batch data integration requirements. An increasing number of software and data as a service (SaaS and DaaS) providers will opt to embed cloud integration platforms into their solutions instead of attempting to “roll their own”. A key requirement for the cloud integration platform will be support for hybrid deployments (see Chris Boorman’s predictions from the world of data integration). By Dreamforce 2011, there will also be much more discussion about MDM in the cloud and the role it will play in a hybrid cloud integration platform.

Enterprise Database.com Adoption

Speaking of Dreamforce, this year salesforce.com introduced Database.com (see my post about it here). Initial adoption will likely be driven by mobile and cloud application developers, but by the end of 2011 we’ll see significant enterprise Database.com adoption. I’m not suggesting we’ll see massive “rip and replace,” but I do believe that departments and divisions in particular will understand and take advantage of the benefits of Database.com versus resource-intensive, on-premise, often open-source alternatives. Of course, Database.com migration will require easy-to-use cloud-based integration services that will also be able to provide the ability to keep data synchronized between systems.

Private Cloud Confusion Continues

When it comes to infrastructure-as-a-service (both public and private) I don’t see things getting, uh, less cloudy (pun intended) in 2011. In a post about Larry Ellison’s Elastic Cloud and Marc Benioff’s False Cloud, Forrester’s Stefan Ried outlined how “private cloud differentiates itself from a traditional but modern and virtualized data center.” For most, this distinction will remain unclear in 2011. In fact, I predict that we’ll see even more terminology confusion and “cloud washing” in anticipation of what Chirag Mehta and R “Ray” Wang call “cloud mega stacks,” which won’t become a reality until 2012. [Emphasis added.]

Those are my predictions. Think I’ll score an A next year? I hope all of your clouds will be connected in 2012. Happy New Year!

Darren is the VP of Marketing for Informatica Cloud, which explains his rather self-serving “Enterprise Database.com Adoption” prediction. Informatica is “extending Informatica Cloud to make it even easier to migrate any legacy database – the schema AND the data – to the cloud, with Database.com.” For my initial take on Database.com, check out my Preliminary Comparison of Database.com and SQL Azure Features and Capabilities post of 12/10/2010.


See IDCTechTalk uploaded IDC's Frank Gens and 2011 Predictions for the Cloud as a 00:05:29 YouTube video on 12/14/2010 in the Windows Azure Infrastructure section above for Gens’ take on private clouds.


<Return to section navigation list> 

Cloud Security and Governance


No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

Mike Benkovich and Brian Prince will present a two-day MSDN Webcast: Windows Azure Boot Camp: Working with Messaging and Queues (Level 200) on 1/4/2011 and 1/5/2011(?):

Event ID: 1032470877
  • Language(s): English.
  • Product(s): Windows Azure.
  • Audience(s): Pro Dev/Programmer.
Event Overview

Get in line for this webcast to learn about queues, messaging, and how Windows Azure takes advantage of queues and messaging to enable some very powerful scenarios when it comes to scalability and distribution of work. We start with the basics of how queues and messaging work and then take a deep dive into how you can customize the way queues and messaging behave.
Technology is changing rapidly, and nothing is more exciting than what's happening with cloud computing. Join us for this webcast series, and get up to speed on developing for Windows Azure, the broad-based Microsoft business solution that helps you meet market demands. This series brings the in-person, two-day Windows Azure Boot Camp online. Each webcast is a stand-alone resource, but the series gives you a complete picture of how to get started with this platform.

Presenters: Mike Benkovich, Senior Developer Evangelist, Microsoft Corporation and Brian Prince, Senior Architect Evangelist, Microsoft Corporation
Energy, laughter, and a contagious passion for coding: Mike Benkovich brings it all to the podium. He's been programming since the late '70s when a friend brought a Commodore CPM home for the summer. Mike has worked in a variety of roles including architect, project manager, developer, and technical writer. Mike is a published author with WROX Press and APress Books, writing primarily about getting the most from your Microsoft SQL Server database. Since appearing in Microsoft's DevCast in 1994, Mike has presented technical information at seminars, conferences, and corporate boardrooms across America. This music buff also plays piano, guitar, and saxophone, but not at his MSDN Events. For more information, visit www.BenkoTIPS.com.

Expect Brian Prince to get (in his own words) "super excited" whenever he talks about technology, especially cloud computing, patterns, and practices. Brian is a senior architect evangelist at Microsoft and has more than 13 years of expertise in information technology management. Before joining Microsoft in 2008, Brian was the senior director of technology strategy for a major Midwest Microsoft partner. Brian has exceptional proficiency in the Microsoft .NET framework, service-oriented architecture, building enterprise service buses (ESBs), and both smart client and web-based applications. Brian is the cofounder of the non-profit organization CodeMash (www.codemash.org), and he speaks at various regional and national technology events, such as TechEd. For more information, visit www.brianhprince.com.

Important: The event’s entry page doesn’t list the dates and times of the event. The registration page lists the dates and times as:

  • Tuesday, January 04, 2011 3:00 AM
  • Wednesday, January 02, 2013 8:00 AM
  • Pacific Time (US & Canada)
  • Duration: 60 Minutes

Obviously, Tuesday’s time is wrong (or at least very strange), and two years is a long time to wait for the second session. I’ve posted feedback. Check the site for updated details.


The Windows Azure Team announced on 1/3/2011 a Free Online Event January 12, 2011: Windows Azure and Cloud for Social Game Developers:


Don't miss the free online event, "Windows Azure and Cloud for Social Game Developers", January 12, 2011, 10:00-11:00 AM PST, to learn more about building social, stable, and scalable games from Sneaky Games, one of the first game developers to deploy a massive web-based game on the Windows Azure Platform.

In this free online session, Sneaky Games CEO David Godwin and Lead Server Engineer Mark Bourland will talk about how they have been able to build a successful social games business by leveraging the latest technologies, including cloud computing and Windows Azure. At the end of the session, the Sneaky Games team will also answer any questions about their experience with Windows Azure and building Flash-based social games backed with Microsoft technology. Earlier this year, Sneaky Games released their top-rated game, Fantasy Kingdoms, on Facebook and Hi5 using Windows Azure for hosting and game services.

Register for the conference here.

Before the webcast, please be sure you have latest version of Microsoft Office Live Meeting 2007.

I’m surprised that Live Meeting 2007 (with a November 2010 update) is the latest version.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

No significant articles today.


<Return to section navigation list> 
