Saturday, June 05, 2010

Windows Azure and Cloud Computing Posts for 6/5/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Rinat Abdullin explains in this 6/4/2010 post why the effective limit on Azure Queue messages is smaller than the documented 8,192 bytes:

We all know that Windows Azure Queues are even more limited in size than MSMQ Queues. And the maximum size of a message is 8 KB.

Well, not quite.

If you are using the Windows Azure Queue client library (Microsoft.WindowsAzure.StorageClient), it exhibits an interesting behavior. Take a look at this simple code:

using (var stream = new MemoryStream())
{
  _messageSerializer.Serialize(data, stream);
  if (stream.Position < 8192)
  {
    var bytes = stream.ToArray();
    // we should not get any exception here
    return new CloudQueueMessage(bytes);
  }
  // else save overflowing part to the Azure BLOB
}

If you try to save a byte buffer of between 6,144 and 8,192 bytes, Azure Queues will throw an exception:

System.ArgumentException: Messages cannot be larger than 8192 bytes.
  at Microsoft.WindowsAzure.StorageClient.CloudQueueMessage..ctor(Byte[] content)

This does not make a lot of sense, right? What this exception actually means is:

Messages cannot be larger than 6144 bytes

If we look into the original DLLs, the reason becomes obvious. The Azure client code does not check the raw byte size; it checks the size of the message after applying Base64 encoding (which inflates the payload by a factor of 4/3):

if (Convert.ToBase64String(content).Length > MaxMessageSize)
{
    throw new ArgumentException(...);
}

So that's definitely one potential improvement point for the Azure client libraries, right in the CloudQueueMessage constructor: make the exception text clearer and drop the overhead of converting to Base64 just for the sake of performing a check.

PS: While we are on the subject of wishes for Azure, there is another one that could significantly improve the productivity of cloud solutions: MSMQ for Azure (this especially applies to scalable cloud solutions built with messaging and CQRS).
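
Until the client library changes, a caller can avoid the misleading exception by pre-checking the raw payload against the effective limit of 8,192 × 3/4 = 6,144 bytes before constructing the message. Here's a minimal C# sketch of that idea (my illustration, not Rinat's code); it assumes the Microsoft.WindowsAzure.StorageClient CloudQueueMessage type shown above and leaves the blob-overflow path to the caller:

using System;
using Microsoft.WindowsAzure.StorageClient;

public static class QueuePayloadHelper
{
  // The 8,192-byte limit applies to the Base64-encoded message, so the raw
  // payload must fit in 8,192 * 3 / 4 = 6,144 bytes.
  private const int MaxRawPayloadBytes = 8192 * 3 / 4;

  public static CloudQueueMessage ToQueueMessage(byte[] payload)
  {
    if (payload.Length > MaxRawPayloadBytes)
    {
      // Too big for a queue message; the caller should write the payload to
      // blob storage and enqueue a reference to it instead (not shown here).
      throw new ArgumentException(String.Format(
        "Payload is {0} bytes; raw queue payloads must be {1} bytes or less " +
        "to allow for Base64 overhead.", payload.Length, MaxRawPayloadBytes));
    }

    // Safe: Convert.ToBase64String(payload).Length will not exceed 8,192.
    return new CloudQueueMessage(payload);
  }
}

Duplicating the library's rule isn't pretty, but it turns the confusing "8192 bytes" failure into an explicit check against the real 6,144-byte ceiling.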

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Damien White (visoft) created an odata_ruby client library for OData services and posted it to GitHub on 6/4/2010:

odata_ruby

The Open Data Protocol (OData) is a fantastic way to query and update data over standard Web technologies. The odata_ruby library acts as a consumer of OData services.

Usage

The API is a work in progress. Notably, you can’t update entities at the moment, nor can changes be bundled (through save_changes).

Adding

Adding is simple. When you point at a service, an AddTo<EntityName> method is created for you. To add a new category, for example, you would do the following:

        require 'lib/odata_ruby'

        svc = OData::Service.new "http://127.0.0.1:8888/SampleService/Entities.svc"
        new_category = Category.new
        new_category.Name = "Sample Category"
        svc.AddToCategories(new_category)
        category = svc.save_changes
        puts category.to_json
Deleting

Deleting is another function that involves the save_changes method (to commit the change back to the server). In this example, we’ll add a category and then delete it.

        require 'lib/odata_ruby'

        svc = OData::Service.new "http://127.0.0.1:8888/SampleService/Entities.svc"
        new_category = Category.new
        new_category.Name = "Sample Category"
        svc.AddToCategories(new_category)
        category = svc.save_changes
        puts category.to_json
        svc.delete_object(category)
        result = svc.save_changes
        puts "Was the category deleted? #{result}"
Querying

Querying is easy. For example, to pull all the categories from the SampleService, you can simply run:

        require 'lib/odata_ruby'

        svc = OData::Service.new "http://127.0.0.1:8888/SampleService/Entities.svc"
        svc.Categories
        categories = svc.execute
        puts categories.to_json

You can also expand and add filters to the query before executing it. For example:

Expanding
        # Without expanding the query
        svc.Products(1)
        prod1 = svc.execute
        puts "Without expanding the query"
        puts "#{prod1.to_json}\n"

        # With expanding the query
        svc.Products(1).expand('Category')
        prod1 = svc.execute
        puts "Without expanding the query"
        puts "#{prod1.to_json}\n"
Filtering
        # You can access by ID (but that isn't a filter)
        # The syntax is just svc.ENTITYNAME(ID) which is shown
        # in the expanding examples above
        svc.Products.filter("Name eq 'Product 2'")
        prod = svc.execute
        puts "Filtering on Name eq 'Product 2'"
        puts "#{prod.to_json}"
Combining Expanding and Filtering
        svc.Products.filter("Name eq 'Product 2'").expand("Category")
        prod = svc.execute
        puts "Filtering on Name eq 'Product 2' and expanding"
        puts "#{prod.to_json}"

Damien continues with testing details.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Dan Fay’s Stanford Students Take to the Cloud post of 6/4/2010 to Microsoft Research’s External Research Team blog reports:

In conducting research, we often look to the past for answers. Today, at Stanford University, I had the opportunity to look to the future. This morning I watched four excellent presentations delivered by the teams of students enrolled in CS210, Project-Based Computer Science Innovation & Development; and this afternoon, I attended the class' fair, modeled after a trade show, where I was able to delve more deeply into each of the projects. If the inquisitiveness, passion and determination of the students I met today are any indication, the future of our profession is in very good hands.

The goal of CS210 is to provide computer science students with an opportunity to collaborate on a real-world project provided by a corporate partner. The challenge of the project Microsoft External Research handed over to the students was to make satellite data more accessible to environmental scientists. Specifically, Team Nimbus was tasked with reducing the costs, time and complexity associated with managing satellite images while at the same time improving the reliability of those images, which are often difficult to manipulate on a desktop.

The result of the team's work is CloudLab, which utilizes the Windows Azure platform to remove the heaviest work from the desktop and put it in the cloud, where there is far more computing power and accessibility. During the development of CloudLab, scientists at the Lawrence Berkeley National Laboratory served as the team's customers. For the students in the class, which was taught by Jay Borenstein, the benefits go far beyond a passing grade. Throughout the class, students gained practical insight into many applied aspects of computer science, such as source control and agile programming methodologies. By working on a real project with the potential to have an impact on industry, the students became better informed about what they may wish to pursue professionally.

Beyond the experience gained by the students, Stanford will use its up-close view of what's important throughout the industry to continue refining its academic offerings. For me, this collaboration effort provided the chance to get to know people whose names I'm confident will one day be familiar to us all. Finally, and most importantly, the experience is a compelling reminder, for all of us throughout the global research community, of how important it is to look at our work and all of its challenges through the perspectives of others as often as possible.

Dan is director of Earth, Energy, and Environment at Microsoft External Research.

tbtechnet reported Code Project: Create a State/Local Govt. Windows Azure Application on 6/2/2010:

The Code Project has kicked off another one of its Windows Azure Platform contests.*

Looks like a great way to promote your applications and possibly get some great PR.

Create a State and Local Government App on Windows Azure and Win a $1000 Intel® i7 laptop!

Here are some nice examples:

It certainly is a timely contest as it seems governments are trying to encourage innovation and help developers get their ideas off the ground:

Bloomberg Businessweek Gov 2.0: The Next Internet Boom

*This challenge is from The Code Project, an independent third-party company not affiliated with Microsoft, and is not a Microsoft offering.

The Windows Azure Team recently posted new “How Do I” videos to the Windows Azure Platform’s Learn site. Here are four:

Each page has links to six other related “How Do I” videos.

<Return to section navigation list> 

Windows Azure Infrastructure

James Hamilton emphasizes “speed of execution” in his The New World Order post of 6/5/2010:

Industry trends come and go. The ones that stay with us and have lasting impact are those that fundamentally change the cost equation. Public clouds clearly pass this test. The potential savings approach 10x and, in cost-sensitive industries, those that move to the cloud fastest will have a substantial cost advantage over those that don't.

And, as much as I like saving money, the much more important game changer is speed of execution. Those companies depending upon public clouds will be noticeably more nimble. Project approval to delivery times fall dramatically when there is no capital expense to be approved. When the financial risk of new projects is small, riskier projects can be tried. The pace of innovation increases. Companies where innovation is tied to the financial-approval cycle and to the lag between ordering hardware and installing it are at a fundamental disadvantage.

Clouds change companies for the better, clouds drive down costs, and clouds change the competitive landscape in industries. We have started what will be an exciting decade.

Earlier today I ran across a good article by Rodrigo Flores, CTO of newScale. In this article, Rodrigo says:

First, give up the fight: Enable the safe, controlled use of public clouds. There’s plenty of anecdotal and survey data indicating the use of public clouds by developers is large. A newScale informal poll in April found that about 40% of enterprises are using clouds – rogue, uncontrolled, under the covers, maybe. But they are using public clouds.

The move to the cloud is happening now. He also predicts:

IT operations groups are going to be increasingly evaluated against the service and customer satisfaction levels provided by public clouds. One day soon, the CFO may walk into the data center and ask, “What is the cost per hour for internal infrastructure, how do IT operations costs compare to public clouds, and which service levels do IT operations provide?” That day will happen this year.

This is a super important point. It was previously nearly impossible to know what it would cost to bring an application up and host it for its operational life. There was no credible alternative to hosting the application internally. Now, with care and some work, a comparison is possible and I expect that comparison to be made many times this year. This comparison won’t always be made accurately but the question will be asked and every company now has access to the data to be able to credibly make the comparison.

I particularly like his point that self-service is much better than “good service”. Folks really don’t want to waste time calling service personnel no matter how well trained those folks are. Customers just want to get their jobs done with as little friction as possible. Fewer phone calls are good.

Think like an ATM: Embrace self-service immediately. Bank tellers may be lovely people, but most consumers prefer ATMs for standard transactions. The same applies to clouds. The ability by the customer to get his or her own resources without an onerous process is critical.

Self-service is cheaper, faster, and less frustrating for all involved. I’ve seen considerable confusion on this point. Many people tell me that customers want to be called on by sales representatives and that they want the human interaction from the customer service team. To me, it just sounds like living in the past. These are old, slow, and inefficient models.

Public clouds are the new world order. Read the full article at: The Competitive Threat of Public Clouds.

Reuven Cohen’s The Cloud Computing Opportunity by the Numbers post of 6/5/2010 is a laundry list of numeric projections for cloud-related computing activities and markets:

How big is the opportunity for cloud computing? A question asked at pretty well every IT conference these days. Whatever the number, it's a big one. Let's break down the opportunity by the numbers available today.

  • By 2011, Merrill Lynch says, the cloud computing market will reach $160 billion. The number of physical servers in the world today: 50 million.
  • By 2013, approximately 60 percent of server workloads will be virtualized
  • By 2013, 10 percent of the total number of physical servers sold will be virtualized, with an average of 10 VMs per physical server sold. At 10 VMs per physical host, that means about 80-100 million virtual machines are being created per year, or 273,972 per day, or 11,375 per hour.
  • 50 percent of the 8 million servers sold every year end up in data centers, according to a BusinessWeek report

The data centers of the dot-com era consumed 1 or 2 megawatts. Today, data center facilities requiring 20 megawatts are common - 10 times as much as a decade ago.

Google currently controls 2% of all servers, or about 1 million servers, and says it plans to have upwards of 10 million servers (10^7 machines) in the next 10 years. The other 98% of the market is controlled by everyone else.

Hosting / Data center providers by top 5 regions around the world: 33,157
Top 5 breakdown:

  • USA: 23,656
  • Canada: 2,740
  • United Kingdom: 2,660
  • Germany: 2,371
  • Netherlands: 1,730

According to IDC, the market for private enterprise cloud servers "will grow from an $8.4 billion opportunity in 2010, representing over 600,000 units, to a $12.6 billion market in 2014, with over 1.3 million units."

  • Market opportunity based purely on server count: $160 billion divided by 50 million servers = $3,200 per server.
  • The amount of digital information increased by 73 percent in 2008 to an estimated 487 billion gigabytes, according to IDC.
  • World Population 2009: 6,767,805,208
  • Internet Users 2000: 360,985,492
  • Internet Users 2009: 1,802,330,457
  • Overall Internet User Growth: 399.3%

Fastest Growth Markets (Last 10 years)

  • Africa +1,809.8%
  • Middle East +1,675%
  • Latin America +934.5%
  • Asia +568.8%

Slowest Growth Markets

  • North America +140.1%


Cloud value by world population: $23.64 per person; cloud value by Global Internet population: $88.77 per person.

Update:

Conclusions:
Based on these numbers, a few things are clear. First, server virtualization has lowered the capital expenditure required for deploying applications, but operational costs have gone up significantly more than the capital savings, making the operational long tail the costliest part of running servers.

Although Google controls 2 percent of the global supply of servers, the remaining 98 percent is where the real opportunities are, both in private enterprise data centers and in 40,000+ public hosting companies.

This year 80-100 million virtual machines will be created, and traditional approaches to managing infrastructure will break. Infrastructure automation is becoming a central part of any modern data center. Providing infrastructure as a service will not be a nice-to-have but a requirement. Hosters, enterprises and small businesses will need to start running existing servers in a cloud context or face inefficiencies that may limit potential growth.
Surging demand for data and information creation will force a migration to both public and private clouds, especially in emerging markets such as Africa and Latin America.

Lastly, there is a tonne of money to be made.
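
Most of Reuven's per-unit numbers are simple division of the $160-billion projection by the counts he quotes. Here's a throwaway C# sanity check (my snippet, using only the figures listed above) that reproduces the per-server and per-person values:

using System;

class CloudMarketMath
{
  static void Main()
  {
    // Figures quoted above: Merrill Lynch's 2011 market projection, the
    // worldwide physical-server count, and the 2009 population numbers.
    const double marketUsd = 160e9;
    const double physicalServers = 50e6;
    const double worldPopulation2009 = 6767805208d;
    const double internetUsers2009 = 1802330457d;

    Console.WriteLine("Per server:        ${0:N0}", marketUsd / physicalServers);     // ~$3,200
    Console.WriteLine("Per person:        ${0:N2}", marketUsd / worldPopulation2009); // ~$23.64
    Console.WriteLine("Per Internet user: ${0:N2}", marketUsd / internetUsers2009);   // ~$88.77
  }
}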

The Microsoft Case Studies Team published on 6/4/2010 a four-page blurb about the Tribune Company’s use of Windows Azure to support a content management system:

Tribune Company, a giant in the traditional media industry, needed to adapt its business to thrive in a changing market. Specifically, it wanted to make it possible for consumers to choose the content most relevant to them. Tribune quickly centralized the content in its many data centers into a single repository using cloud computing on the Windows Azure platform.

Journalists and editors now have a single source for submitting and retrieving content, and the company can provide consumers with targeted content through online, mobile, and traditional distribution methods. Furthermore, Tribune experienced cost savings, a fast time-to-market, and a reduced IT management burden with Windows Azure. The company believes the platform’s fully scalable nature to be critical in expanding its revenue opportunities as it transforms how it delivers news.

Bernard Golden explains How Cloud Computing Can Transform Business in this 6/4/2010 article for the Harvard Business Review:

You're in a meeting. You and your team identify a great new business opportunity. If you can launch in 60 days, a rich new market segment will be open for your product or service. The action plan is developed. Everything's a go.

And then you come down to earth. You need new computer equipment, which takes weeks, or months, to install. You also need new software, which adds more weeks or months. There's no way to meet the timeframe required by the market opening. You are stymied by your organization's lack of IT agility.

Or, you could have the experience the New York Times had when it needed to convert a large number of digital files to a format suitable to serve up over the web. After the inevitable "it will take a lot of time and money to do this project," one of their engineers went to the Amazon Web Services cloud, created 20 compute instances (essentially, virtual servers), uploaded the files, and converted them all over the course of one weekend.

Total cost? $240.

This example provides a sense of why cloud computing is transforming the face of IT, with the potential to deliver real business value. The rapid availability of compute resources in a cloud computing environment enables business agility — the dexterity for businesses to quickly respond to changing business conditions with IT-enabled offerings.

Notwithstanding the fact that IT seems to always have the latest, greatest thing on its mind, cloud computing has the entire IT industry excited, with companies such as IBM, Microsoft, Amazon, Google and others investing billions of dollars in this new form of computing. And in terms of IT users, Gartner recently named cloud computing as the second most important technology focus area for 2010.

Bernie goes on with the requisite “What is cloud computing?” topic and concludes:

It's a cliché to say that business is changing at an ever-increasing pace, but one of the facts about clichés is they often contain truth. The deliberate pace of traditional IT is just not suited for today's hectic business environment. Cloud computing's agility is a much better match for constantly mutating business conditions. To evaluate whether your business opportunities could be well-served by leveraging the agility of cloud computing, download the HyperStratus Cloud Computing Agility Checklist, which outlines ten conditions that indicate a business case for taking advantage of the agility of cloud computing.

Well, if it’s in the Harvard Business Review, it must be true.

Owen Garrett promises to show you “How to find the best cloud computing services for your business” in his Keeping Cloud Costs Grounded article of 6/4/2010 for Forbes.com:

The cost of cloud computing has generated little debate because the savings appear so self-evident. IBM's CTO for Cloud Computing, Kristof Kloeckner, estimates that it reduces IT labor costs by up to 50%, improves capital utilization by 75% and reduces provisioning from weeks to minutes. The City of Los Angeles anticipates savings of more than $5 million with its move to Google Apps. Because of such apparent savings, few companies have taken the time to question the cost implications of working in the cloud.

The problem with this is that cloud computing takes on many forms, and, if not planned for properly, will not deliver the expected ROI. As Andy Mulholland, global CTO at Capgemini has said, "Relatively speaking, [cloud computing] is unstoppable. The question is whether you'll crash into it or migrate into it."

Similar to implementing any new technology, understanding the key business needs and the technology's role in supporting it are critical before leveraging the cloud. Unexpected high costs due to poor planning can negatively impact take-up of further cloud initiatives, so companies need to put in the appropriate upfront time conducting research. Without across-the-board involvement, the cloud could end up costing more than you think.

Identify The Right Type Of Cloud
Every cloud service and cloud architecture has different capabilities, so it's important to determine which ones best meet your business objectives. Clouds may be hosted in-house ("private"), outsourced ("public") or a blend of the two ("hybrid") giving you varying degrees of control and cost. Clouds also deliver IT resources "as-a-service," whether it's via SaaS (software-as-a-service), IaaS (infrastructure-as-a-service) or less commonly, PaaS (platform-as-a-service).

An organization with a large internal IT estate may wish to repurpose some of this to create a private infrastructure cloud--a sound way to increase utilization of existing assets and consider the internal economics of providing IT as a service. At the other end of the spectrum, an organization with a mobile workforce that makes heavy use of business applications may find that selecting public SaaS over in-house services offers improved productivity, as well as cost savings. Each business, no matter its size, will need to determine which cloud technologies will serve them best. …

Owen, who’s chief innovation officer of Zeus Technology, a Web infrastructure software company, continues with a Pricing Models And Vendor Lock-In topic. Since when is “chief innovation officer” a candidate for CxO status?

<Return to section navigation list> 

Cloud Security and Governance

See the Cloud Security Alliance (CSA) announced Cloud Security Alliance hosts Cloud Security Alliance Summit at Black Hat on 7/28 and 7/29/2010 post in the Cloud Computing Events section below.

Andrew Marshall, Michael Howard, Grant Bugher and Brian Harden co-wrote Microsoft’s 28-page Security Best Practices For Developing Windows Azure Applications white paper dated May 2010. From the front matter:

Executive Summary

As businesses seek to cost-effectively consume IT services, interest is growing in moving computation and storage from on-premise equipment to Internet-based systems, often referred to as “the cloud."

Cloud computing is not restricted to large enterprises; small companies benefit greatly from moving computing and storage resources to systems such as Windows Azure. In fact, smaller companies are adopting this new paradigm faster than larger companies[1].

The idea that purchasing services from a cloud service provider may allow businesses to save money while they focus on their core business is an enticing proposition. Many analysts view the emerging possibilities for pricing and delivering services online as disruptive to market conditions. Market studies and the ensuing dialogue among prospective customers and service providers reveal some consistent themes and potential barriers to the rapid adoption of cloud services. Business decision makers want to know, for example, how to address key issues of security, privacy and reliability in the Microsoft Cloud Computing environment, and they are concerned as well about the implications of cloud services for their risk and operations decisions.

This paper focuses on the security challenges and recommended approaches to design and develop more secure applications for Microsoft’s Windows Azure platform. Microsoft Security Engineering Center (MSEC) and Microsoft’s Online Services Security & Compliance (OSSC) team have partnered with the Windows Azure team to build on the same security principles and processes that Microsoft has developed through years of experience managing security risks in traditional development and operating environments.

Intended Audience

This paper is intended to be a resource for technical software audiences: software designers, architects, developers and testers who design, build and deploy more secure Windows Azure solutions.

This paper is organized into two sections[2]:

  1. Overview of Windows Azure security-related platform services; and
  2. Best practices for secure design, development and deployment:
     • Service-layer/application security considerations:
       • Protections provided by the Azure platform and underlying network infrastructure.
       • Sample design patterns for hardened/reduced-privilege services.

[1] “Cloud Computing: Small Companies Take Flight” BusinessWeek http://www.businessweek.com/technology/content/aug2008/tc2008083_619516.htm

[2] The distinction between “security features” and “secure features” is an important one. “Security features” are the technologies, such as authentication or encryption, that can help protect a system and its data. “Secure features” are technologies that are resilient to attack, such as encryption key storage and management or code with no known vulnerabilities.

<Return to section navigation list> 

Cloud Computing Events

Bill Wilder posted on 6/5/2010 links to his Two Azure Talks at New Hampshire Code Camp:

Today I gave two talks at the New Hampshire Code Camp 2 in Concord, NH.

My talks were Azure Demystified – What is Cloud Computing? What is Windows Azure? and Why should we care? followed by Two Roles and a Queue – The most important design pattern for Windows Azure Cloud apps.

The PowerPoint slides are available right here:

Also plugged the Boston Azure User Group to those attending my talks! Hope to see some of you at NERD in Cambridge, MA for talks and hands-on-coding sessions. Details always at bostonazure.org.

The Cloud Security Alliance (CSA) announced Cloud Security Alliance hosts Cloud Security Alliance Summit at Black Hat on 7/28 and 7/29/2010:

The Cloud Security Alliance will be hosting the second "Cloud Security Alliance Summit at Black Hat". Following the sold-out CSA Summit held at the 2010 RSA Conference, the CSA Summit at Black Hat will be presented as a half-day session concurrent with the popular Black Hat Briefings. The CSA faculty will consist of veteran Black Hat presenters focused on cloud issues. Highlights of the Summit include:

  • A keynote presentation from noted cloud security expert and CSA founding member, Christopher Hoff (@Beaker) [Emphasis and Twitter ID added.]
  • Cloud threat case studies
  • Secure software development in the cloud
  • Hypervisor: past, present and future
  • Hacking identity in the cloud
  • New research announcements from CSA

The CSA Summit is open to all Black Hat Briefings registrants; the Briefings run July 28-29, 2010 at Caesar's Palace, Las Vegas.

CSA members interested in attending the CSA Summit at Black Hat are being offered an exclusive 20% discount for the event. Visit the CSA LinkedIn group to access the discount code.

About Cloud Security Alliance

The Cloud Security Alliance is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing. The Cloud Security Alliance is led by a broad coalition of industry practitioners, corporations, associations and other key stakeholders. For further information, the Cloud Security Alliance Web site is www.cloudsecurityalliance.org.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Maxwell Cooter reports “Elastic Hosts offers 100% SLA as inducement to customers” in his Cloud computing company opens doors in the US post of 6/4/2010 to NetworkWorld’s Infrastructure Management blog:

UK hosting company Elastic Hosts has opened up a cloud computing facility in the US, claiming to be only the second company to offer cloud services on both sides of the Atlantic.

The company has opened a facility in San Antonio and is set to take on Amazon by offering services to both UK and US customers. Elastic Hosts offers on-demand capacity billed by the hour rather than for a fixed term.

Elastic Hosts CEO Richard Davies said that the company had been asked by its own European customers to open a facility in the US and thought that the time was now ripe.

He said that the company offered a different service from Amazon. "We handle things differently from the way that Amazon does things. We aim to provide a service to our customers allowing them to configure a cloud solution the same way that they configure a physical machine. We also aim to make things as simple as possible. For example, take the way that we handle static IP addresses - if someone wants a static IP address, we can give them true static. Amazon doesn't do that; it offers a form of network address translation - it comes to the same thing but our offering is much less complicated."

Davies said that Elastic Hosts uses a modern platform, another area where he believes the service scores over Amazon. "We use technology that wasn't available when Amazon built its system some years ago."

The other distinguishing factor, said Davies, is that the company offer[s] a 100 percent SLA. "We back this by saying if you experience [downtime], we will give you a 100x credit - that is an extremely strong commitment."

“100x credit” sounds different to me than “100 percent SLA;” that would be a “10,000 percent SLA.”

It’s interesting that Elastic Hosts chose the same city as Microsoft’s South Central US data center: San Antonio. The city might be offering financial incentives for building data centers there.

Juan Carlos Perez claims “Google promises improvements to the service but will waive fees retroactively until the problems are solved” in his Update: Google App Engine's datastore falters under demand post of 6/3/2010 to InfoWorld’s Cloud Computing blog:

Two weeks after announcing a business version of its Google App Engine application building and hosting service, Google is acknowledging that the performance of the product's datastore has been chronically deficient for weeks.

To make up for the recent string of outages, slowdowns, and errors, Google is waiving datastore CPU costs retroactively effective to the May 31 bill and until further notice.

The datastore problems, which have rippled out to other App Engine components, have been caused by the platform's growth, which has outpaced server capacity, Google said in a blog post on Wednesday.

"There are a lot of different reasons for the problems over the last few weeks, but at the root of all of them is ultimately growing pains. Our service has grown 25 percent every two months for the past six months," the blog post reads.

Jason Spitkoski, co-founder and director of Schedule Bin, a Web application that lets employers create and manage employee work schedules, has been using App Engine for two years, and started seeing performance issues in the past couple of months, with a marked deterioration in the past two weeks.

Because Schedule Bin is still in beta, customers have been understanding about the performance problems, but Spitkoski has been left red-faced in front of prospective customers.

"It is more of an issue when we approach new customers, explain the benefits of using a cloud and then show them the product, only to find slow performance and having to improvise our pitch to avoid the awkward silence that looms as the app slowly interacts with the cloud," he said.

Google is scrambling to build up the service's infrastructure to stamp out the issue, but performance is expected to remain rocky for the next two weeks, Google said.

"I was a bit surprised with how long it took them to address the current performance issues," Spitkoski said. "For a company with so much data and information, I would have expected them to be more proactive."

The situation is ironic because Google App Engine is a cloud-based application development and hosting platform created so that developers could focus on building applications without having to worry about garden-variety computing issues, such as server problems.

As with other cloud services, App Engine's selling point is that the vendor, in this case Google, is better equipped to handle IT infrastructure tasks than most, if not all, potential clients, and thus should be entrusted with handling tasks like hardware provisioning and software maintenance.

If you need more proof that everyone wants to get into the cloud computing game, in this case SaaS and IaaS, read Arrow Electronics, Inc.'s Arrow ECS Unveils 'Arrow Fusion,' Launches Cloud Services Line press release of 6/2/2010:

Building on its commitment to the services market, Arrow Enterprise Computing Solutions, a business segment of Arrow Electronics Inc. [NYSE: ARW], unveiled a new brand identity for its North American professional services business unit and launched a line of cloud computing services under that business unit.

Arrow Fusion(SM) is the new name for Arrow ECS' professional services business. The new name reflects the distributor's focus on bringing together multiple services offerings and working closely with resellers to augment their services capabilities.

In addition to the professional services offerings that were announced in March, the Arrow Fusion Professional Services business unit now includes cloud services such as solutions for data center monitoring and management; security- and software-as-a-service (SaaS); infrastructure-as-a-service (IaaS); and business continuity and disaster recovery.

"Arrow ECS is committed to assisting resellers with providing comprehensive solutions to support their customers' needs. As the cloud computing market continues to grow, resellers can rely on Arrow ECS to connect them to industry-leading services so they can continue to meet the demands of the market," said Joe Burke, vice president of worldwide services for Arrow ECS. "Arrow Fusion cloud services address many of the data center needs that companies are most concerned about today and are powered by leading technology providers."

The Arrow Fusion remote monitoring and management solutions provide detailed reporting on the performance of technology in the data center, including systems, applications, servers and databases. A variety of service-level packages are available depending on the customer's needs, such as basic monitoring and performance reporting, base-level remediation and full remediation support.

Arrow Fusion SaaS solutions include security services for e-mail and web defense, e-mail disaster recovery and message archiving. SaaS solutions also include software services for subscription-based access to Microsoft-hosted Exchange and SharePoint, contact relationship management and business productivity tools.

IaaS services include solutions for virtual, managed and fully-dedicated private servers that allow more computing power and access to servers, as well as storage and bandwidth control, depending on the needs of the organization.

Arrow Fusion business continuity and disaster recovery services include solutions for offsite data backup and recovery protection.

It remains to be seen if Arrow Fusion is competitive with the current IaaS offerings.

<Return to section navigation list> 
