Sunday, January 17, 2010

Windows Azure and Cloud Computing Posts for 1/15/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
• Update 1/17/2010: Joe McKendrick: Second Annual SOA-Cloud QCamp Announced!; James Hamilton: Private Clouds Are Not The Future; Reuven Cohen: CloudCamp Haiti (Fundraiser) Jan 20, 2010; eWeek: Database: Six Strategies Database Administrators Need to Know for 2010; Robert Westervelt: Researchers say search, seizure protection may not apply to SaaS data; James Urquhart: Does the Fourth Amendment cover 'the cloud'?

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts, Databases, and DataHubs*”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the November CTP in January 2010. 
* Content for managing DataHubs will be added as Microsoft releases more details on data synchronization services for SQL Azure and Windows Azure.

Off-Topic: OakLeaf Blog Joins Technorati’s “Top 100 InfoTech” List on 10/24/2009.

Azure Blob, Table and Queue Services

Joannes Vermorel recommends Fat entities for Table Storage in Lokad.Cloud in this 1/15/2010 post:

After realizing the value of the Table Storage, giving a lot of thought to higher-level abstractions, and stumbling upon a lot of gotchas, I have finally ended up with what I believe to be a decent abstraction for the Table Storage.

The purpose of this post is to outline the strategy adopted for this abstraction which is now part of Lokad.Cloud. [Emphasis Joannes’.]

Table Storage (TS) comes with an ADO.NET provider as part of the StorageClient library. Although I think that TS itself is a great addition to Windows Azure, frankly, I am disappointed by the quality of the table client library. It looks like a half-baked prototype, far from what I typically expect from a v1.0 library produced by Microsoft.
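The “fat entity” idea, as I understand Joannes’s post, is to stop mapping each .NET property to a table column and instead serialize the whole object into the entity’s payload, keyed only by partition and row. Here’s a minimal sketch of that notion against the SDK 1.0 StorageClient types; the FatEntity and Wrap names are mine for illustration, not Lokad.Cloud’s actual API, which also chunks the payload across properties to respect storage limits:

    using System.IO;
    using System.Runtime.Serialization;
    using Microsoft.WindowsAzure.StorageClient;

    // Illustrative sketch only; not Lokad.Cloud's actual implementation.
    public class FatEntity : TableServiceEntity
    {
        // Table Storage caps a single property at 64 KB and an entity at
        // 1 MB, so a production version must chunk the payload.
        public byte[] Payload { get; set; }

        public static FatEntity Wrap<T>(string partition, string row, T value)
        {
            var serializer = new DataContractSerializer(typeof(T));
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, value);
                return new FatEntity
                {
                    PartitionKey = partition,
                    RowKey = row,
                    Payload = stream.ToArray()
                };
            }
        }
    }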

Paul Arundel’s Inserting an Entity into Azure Table Storage from BizTalk post of 1/15/2010 begins:

Following on from my previous post where I used a custom WCF adapter to insert a BizTalk message into Azure blob storage I wanted to try the same but with Azure Table storage. This would of course have been more meaningful if it had followed immediately after my previous post and not three months after. I blame Christmas.

An Azure table is not like the familiar relational tables we get with SQL Server. If you want one of those in the cloud, try SQL Azure. Azure tables are schema-less and designed for scalability. As an individual table has no schema, you can insert different entity types with different properties into the same table and it won't mind a bit. This allows you to store types that are naturally retrieved at the same time (say, an Order and its Order items) in the same table and optimize your queries appropriately. This is partly achieved by each table entity having two keys, a partition key and a row key, which jointly make up an entity's unique key. The partition key plays the important role: entities with the same partition key are stored consecutively on the same storage node, so queries don't have to scan multiple storage nodes for entities with a particular partition key. This is, at least, how I understand it. (This is also possibly irrelevant unless you are dealing with a lot of entity rows.)
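Here’s a hedged sketch of those mechanics against the SDK 1.0 StorageClient library (table name, entity types and key values are invented for illustration): an Order and its OrderItems land in the same schema-less table, kept together by a shared partition key:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Two unrelated entity shapes in one schema-less table; the shared
    // PartitionKey keeps an order and its items on the same storage node.
    public class Order : TableServiceEntity
    {
        public string Customer { get; set; }
    }

    public class OrderItem : TableServiceEntity
    {
        public string Product { get; set; }
        public int Quantity { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var account = CloudStorageAccount.DevelopmentStorageAccount;
            var tableClient = account.CreateCloudTableClient();
            tableClient.CreateTableIfNotExist("Orders");

            var context = tableClient.GetDataServiceContext();
            // PartitionKey = the order id; RowKey distinguishes rows within it.
            context.AddObject("Orders", new Order
                { PartitionKey = "order-42", RowKey = "order", Customer = "Contoso" });
            context.AddObject("Orders", new OrderItem
                { PartitionKey = "order-42", RowKey = "item-1", Product = "Widget", Quantity = 3 });
            context.SaveChangesWithRetries();
        }
    }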

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

• eWeek: Database: Six Strategies Database Administrators Need to Know for 2010:

The amount of data that enterprises have to store has been expanding, and 2010 promises no reversal of that trend. For IT managers, the challenge of dealing with so much data is not going away. With that in mind, eWEEK spoke to a number of analysts about what database administrators and the companies they work for should be thinking about in 2010. Here is what eWEEK found.


Farzhad Banifatemmi presents a Series of Discussion on Project Management and SQL Azure or Cloud Computing Part 1 on 1/15/2010.

<Return to section navigation list> 

AppFabric: Access Control, Service Bus and Workflow

Ellen Rubin asserts “Advances in federation are good news for companies considering a move to the cloud since deployments no longer need to be custom” in her Cloud Federation and the Intercloud post of 1/16/2010:

Last week’s post explored federation in the cloud, allowing enterprises to move workloads seamlessly across internal and external clouds according to business and application requirements. Advances in federation are good news for companies considering a move to the cloud since deployments no longer need to be custom projects and applications no longer have to be tightly coupled to a particular cloud.

To follow up, there’s been lots of discussion recently about the concept of the “Intercloud,” a direction for cloud computing that is closely related to federation and ties in with much of our work at CloudSwitch. A term introduced by Cisco, the Intercloud refers to a mesh of clouds that are interconnected based on open standards to provide a universal environment for cloud computing. As the name suggests, it’s similar to the Internet model, where everything is federated in a ubiquitous, multiple-provider infrastructure.

Robert Zhu’s Configuring Active Directory Federation Services 2.0 post of 1/15/2010 begins:

Active Directory Federation Services (AD FS) 2.0 makes it possible to deploy a federation server and begin issuing tokens quickly by following these steps:

1) AD FS 2.0 software installation

2) Initial configuration

3) Add a relying party trust

4) Add more federation servers to the farm (Optional)

5) Configure a federation server proxy (Optional)

In this blog post, I’ll discuss Initial Configuration (step 2) in detail. If you are looking for prerequisite information about how to set up and configure a new federation server for the first time, I suggest looking at the topic titled Checklist: Setting Up a Federation Server in the AD FS 2.0 Deployment Guide.

And continues with a fully illustrated description of the configuration process.

Robert Zhu is a Software Design Engineer on the AD FS Team.

<Return to section navigation list>

Live Windows Azure Apps, Tools and Test Harnesses

Rob Gillen presents the slides from his CodeMash: Azure – Lessons from the Field session at CodeMash and includes links to related topics.

John Moore offers props to HealthVault in his Top Ten Predictions for Healthcare IT in 2010 in a 1/14/2010 post to the Chilmark Research blog. John writes in prediction #8:

HealthVault Continues to Put Distance Between Itself and Other Personal Health Platforms (PHP): While Dossia struggles to get its founding members to on-ramp to the Dossia platform (still only Wal-Mart today, though I’ve been told 2-3 others should go live in Q1), and Google messes around with Android and Chrome while virtually ignoring Google Health, Microsoft’s HealthVault continues to push ahead, becoming the de facto PHP in the market much like Apple’s iPhone is the de facto smartphone today.

<Return to section navigation list>

Windows Azure Infrastructure

• James Hamilton argues Private Clouds Are Not The Future in this 1/17/2010 essay:

Cloud computing is an opportunity to substantially improve the economics of enterprise IT. We really can do more with less.

I firmly believe that enterprise IT is a competitive weapon and, in all industries, the leaders are going to be those that invest deeply in information processing. The best companies in each market segment are going to be information processing experts and, because of this investment, are going to know their customers better, will choose their suppliers better, will have deep knowledge and control of their supply chains, and will have an incredibly efficient distribution system. They will do everything better and more efficiently because of their information processing investment. This is the future reality for retail companies, for financial companies, for petroleum exploration, for pharmaceuticals, for sports teams, and for logistics companies. No market segment will be spared and, for many, it’s their reality today. Investment in IT is the only way to serve customers and shareholders better than competitors.

It’s clear to me that investing in information technology is the future of all successful companies and it’s the present for most. The good news is that it really can be done more cost effectively, more efficiently, and with less environmental impact using cloud computing. We really can do more with less.

The argument for cloud computing is gaining acceptance industry-wide. But, private clouds are being embraced by some enterprises and analysts as the solution and the right way to improve the economics of enterprise IT infrastructure. Private clouds may feel like a step in the right direction but scale-economics make private clouds far less efficient than real cloud computing. What’s the difference? At scale, in a shared resource fabric, better services can be offered at lower cost with much higher resource utilization. We’ll look at both the cost and resource utilization advantages in more detail below.

James is a Vice President and Distinguished Engineer on the Amazon Web Services team where he is focused on infrastructure efficiency, reliability, and scaling. Prior to AWS, James was architect on the Microsoft Data Center Futures team and before that he was architect on the Live Platform Services team.

Chris Hoff (@Beaker) asserts Cloud: Over Subscription vs. Over Capacity Are Two Different Things in this 1/15/2010 post:

There’s been a very interesting set of discussions lately regarding performance anomalies across Cloud infrastructure providers.  The most recent involves Amazon Web Services and RackSpace Cloud. Let’s focus on the former because it’s the one that has a good deal of analysis and data attached to it.

Reuven Cohen’s post (Oversubscribing the Cloud) summarizing many of these concerns speaks to the meme wherein he points to Alan Williamson’s initial complaints (Has Amazon EC2 become over subscribed?) followed by CloudKick’s very interesting experiments and data (Visual Evidence of Amazon EC2 network issues) and ultimately Rich Miller’s summary including a response from Amazon Web Services (Amazon: We Don’t Have Capacity Issues).

The thing that’s interesting to me in all of this is yet another example of people mixing metaphors, terminology and common operating methodologies as well as choosing to suspend disbelief and the reality distortion field associated with how service providers actually offer service versus marketing it.

Here’s the kicker: over subscription is not the same thing as over capacity. BY DESIGN, modern data/telecommuication (and Cloud) networks are built using an over-subscription model. [Emphasis Beaker’s.]
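A quick numeric sketch of the distinction Beaker is drawing (all figures invented for illustration): oversubscription is a provisioning ratio chosen at design time, while over capacity is a runtime condition that occurs only when concurrent demand actually exceeds the physical resource:

    // Illustrative numbers only: oversubscription is a design-time ratio,
    // over capacity is a runtime condition.
    class Oversubscription
    {
        static void Main()
        {
            double uplinkGbps = 10.0;        // physical capacity of a host uplink
            int tenants = 100;               // VMs provisioned behind it
            double soldGbpsPerVm = 1.0;      // bandwidth each tenant is sold

            double ratio = tenants * soldGbpsPerVm / uplinkGbps;
            System.Console.WriteLine("Oversubscribed {0}:1 by design", ratio);

            // The network is over capacity only when simultaneous demand
            // exceeds the physical link, not when the sum of what was
            // sold exceeds it.
            double peakConcurrentGbps = 6.5; // measured, not provisioned
            System.Console.WriteLine(peakConcurrentGbps > uplinkGbps
                ? "Over capacity: contention, loss, latency"
                : "Within capacity: no user-visible impact");
        }
    }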

Reuven Cohen’s Oversubscribing the Cloud post of 1/15/2010 is the topic of Chris Hoff’s preceding essay:

There's been a bit of a debate raging over whether or not Amazon EC2 has been oversubscribed and is suffering from performance problems because of it. The discussion started when Alan Williamson wrote a blog post on Tuesday that said he was experiencing growing performance problems while running a large EC2 deployment for one of his customers. The post accused Amazon of oversubscribing their environment, which meant he needed to buy larger instances to maintain the same level of performance, in turn increasing his client’s costs.

The debate hits at the heart of the complexities involved in trying to deploy cost-effective, revenue-generating, public-use infrastructure as a service platforms. I've been saying this for a while -- one of the hardest parts of creating a public cloud service is estimating your customers' demand while trying to remain competitive, which really means having prices that are on par with or better than Amazon EC2's.

Geva Perry’s Software Delivery Models in the Era of Cloud Computing post of 1/15/2010 asserts:

Increasingly, software companies are facing a dilemma as to the best delivery model for their business, and many are opting for a "multi-delivery" model or multi-modal software delivery. A typical scenario is that the vendor will offer the software both for installation on-premise and as an on-demand service. Wordpress and Atlassian were early pioneers of this approach, with the latter offering products such as JIRA, Confluence and Crucible for both download and as a hosted solution. And on the big vendor front, Microsoft is pursuing a Software+Services strategy.

Michael Biddick wrote the Research: SaaS Strategy white paper for InformationWeek: Analytics, which was released on 1/15/2010 and costs $99.00. Here’s the abstract:

Business technology leaders find themselves in something of a cloud computing deluge, showered by vendor marketing, new services, and even CEO questions about their “cloud strategy.” Much of the exuberance centers on the kind of computing-by-the-hour service that Amazon and others sell but most enterprises are only starting to ponder. Amid the lofty aspirations, few have noticed just how powerful and grounded a force software as a service has become. The impact that SaaS will have on IT organizations is profound, and as business technology leaders, we need to ensure that our companies are ready for it.

While SaaS shifts software deployment and maintenance burdens to the service provider, freeing up resources for other projects, IT is at the mercy of the provider for availability, data security, regulatory compliance, and other key issues. Outages will halt business, and poor response times will hamper productivity. SaaS apps aren’t just a nice-to-have. Three-fourths of companies consider application services extremely or critically important to their organizations, according to our InformationWeek Analytics survey of 281 business technologists.

About one-third of the 131 respondents using SaaS describe their SaaS apps as mission critical.

Despite that importance, too many IT leaders treat SaaS ad hoc. Of those using SaaS, 59% say it’s a tactical point solution, and only 32% consider it part of their long-term strategy. CIOs will get the most from SaaS by making it part of an overall enterprise architecture. We’ll spell out nine key areas an effective SaaS strategy must address, and analyze the key drivers (speed and cost) and barriers (security, understanding, and data ownership) to SaaS use.

Geva Perry provides links to My Top Ten Posts in 2009 in this 1/15/2010 post:

Needless to say 2009 was an exciting year for cloud computing developments. Here are my top ten blog posts, ranked by popularity:

  1. Accounting for Cloud: Stop Saying Capex Vs. Opex
  2. The Open Cloud Manifesto: Much Ado About Nothing
  3. Application Lifecycle in the Cloud
  4. Why (and What) Every Business Exec Needs to Know About Cloud Computing
  5. Thoughts on Amazon EC2 Spot Instances
  6. The Legendary Enterprise Sale: Goodbye and Thanks for All the Bluebirds
  7. Amazon Reserved Instances: Do They Make Business Sense?
  8. Cloud Computing Jobs: A Leading Indicator
  9. Cloudcenters and Infrastructure Web Services: What's the difference?
  10. What are Amazon EC2 Compute Units?

Phil Wainewright writes in his Enterprise Cloud Cross-Currents post of 1/15/2010 to the Enterprise Irregulars blog:

The new year has kicked off with some contrasting cross-currents for enterprise cloud aficionados and neophytes alike. On the positive side of the balance sheet, there’s a new and surprising Gartner prediction that a fifth of enterprises will have migrated all their IT to the cloud by 2012. I say surprising, not because I disagree with it, but because it originates from Gartner, whose analysts usually remain more conservative. “No IT assets in two years? That prediction seems pretty extreme, even for the most enthusiastic cloud-embracing enterprises,” says Joe McKendrick.

I myself have been making bullish predictions for cloud in a webcast discussion this week (recording here) with fellow Enterprise Irregular bloggers Vinnie Mirchandani and Dennis Howlett alongside Appirio’s Narinder Singh, followed up by a blog post musing on the credibility of private cloud.

On the negative side, there were two significant outages at cloud providers. Salesforce.com had a poor start to 2010 with a one-hour outage on the first full working day of the year. As SearchCloudComputing’s report points out, “The service interruption ensures the company’s uptime for 2010 will not rise above 99.9% availability.” Meanwhile, on Jan 2, Ruby-on-Rails platform-as-a-service provider Heroku, which hosts its service on high-end machine instances at Amazon EC2, was down for close to an hour when a routing glitch switched that entire layer of EC2 machines off air.
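For readers keeping score at home, the availability arithmetic behind such claims works out like this (a back-of-the-envelope sketch, not SearchCloudComputing’s own calculation):

    class Availability
    {
        static void Main()
        {
            double hoursPerYear = 365 * 24;   // 8,760 hours
            double outageHours = 1.0;         // the one-hour outage in the quote
            double maxUptime = (hoursPerYear - outageHours) / hoursPerYear;
            // Prints ~99.9886%: a single one-hour outage already rules out
            // 99.99% ("four nines") availability for the year.
            System.Console.WriteLine("Max achievable 2010 uptime: {0:P4}", maxUptime);
        }
    }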

<Return to section navigation list> 

Cloud Security and Governance

James Urquhart asks Does the Fourth Amendment cover 'the cloud'? in this 1/17/2010 review of David A. Couillard’s Defogging the Cloud: Applying Fourth Amendment Principles to Evolving Privacy Expectations in Cloud Computing (see below):

One of the biggest issues facing individuals and corporations choosing to adopt public cloud computing (or any Internet service, for that matter) is the relative lack of clarity with respect to legal rights over data stored online. I've reported on this early legal landscape a couple of times, looking at decisions to relax expectations of privacy for e-mail stored online and the decision to allow the FBI to confiscate servers belonging to dozens of companies from a co-location facility whose owners were suspected of fraud.

However, while I've argued before that the government has yet to apply the right metaphor to the modern world of networked applications and data, there has been little literature that has actually dissected the problem in detail. Even worse, I've seen almost no analysis of how the United States Constitution's Fourth Amendment, which guards against unreasonable searches and seizures, applies to Internet-housed data.

… I just had the pleasure of reading an extremely well-written note in the June 2009 edition of the Minnesota Law Review titled "Defogging the Cloud: Applying Fourth Amendment Principles to Evolving Privacy Expectations in Cloud Computing (PDF)." Written by David A. Couillard, a student at the University of Minnesota Law School expected to graduate this year, the paper is a concise but thorough outline of where we stand with respect to the application of Fourth Amendment law to Internet computing. It finishes by introducing a highly logical framework for evaluating the application of the Fourth Amendment to cases involving cloud-based data. …

(Graphics Credit: Flickr/Thorne Enterprises)

David A. Couillard’s Defogging the Cloud: Applying Fourth Amendment Principles to Evolving Privacy Expectations in Cloud Computing Note 93 Minn. L. Rev. 2205 (2009) for the June 2009, Volume 93, No. 6 of the Minnesota Law Review carries this abstract:

It took nearly a century after the invention of the telephone for the Supreme Court to recognize that the Fourth Amendment could be applied to the content of private telephone conversations. Today, the Internet is in a similar state of limbo, with courts reluctant to grant Fourth Amendment protection to data placed in a medium that has been perceived as inherently public in nature. This perception has begun to shift as Internet technology becomes faster, more widespread, and more mobile. “Cloud computing” is the trendy phrase used to describe this change. Rather than merely a medium of mass communication, the ethereal Internet “cloud” is now used as a virtual platform for storing and interacting with data that are intended to remain private yet accessible anywhere. Although some courts have recently recognized limited protection for e-mails and text messages, these narrow holdings are not universal. The third-party doctrine further complicates the issue when content and quasi-transactional data are being stored by cloud service providers.

This Note argues that because the Internet has evolved to allow new uses, data placed in the cloud merit some level of Fourth Amendment privacy protection. Fourth Amendment protection requires a subjectively reasonable expectation of privacy. Because limited means exist to conceal virtual containers in the cloud, methods such as encryption and password protection should be analogized to virtual opacity rather than the lock-and-key analogy that has been dismissed by some scholars. Finally, courts should acknowledge the landlord-tenant nature of the relationship between the cloud service provider and the user, and thus the use of cloud platforms should not create a categorical waiver of Fourth Amendment protection under the third-party doctrine.

Robert Westervelt reports about Alex Stamos’ Cloud Computing Models and Vulnerabilities: Raining on the Trendy New Parade presentation to the BlackHat 2009 conference in a 7/31/2009 Researchers say search, seizure protection may not apply to SaaS data article for SearchSecurity.com:

Firms embracing Software as a Service (SaaS) are not protected from government and civil search and seizure actions and may not be informed if their SaaS data is seized from their provider, according to a researcher studying the issue.

"In cloud computing, you will not have the ability to fight seizure before it happens," said Alex Stamos, co-founder and partner of security consultancy iSEC Partners Inc. "You may not even know. There are no legal requirements for [SaaS providers] to notify you, and in fact, they may be gagged from doing so."

Stamos is referring to the SaaS model, in which the entire IT stack, from the servers to the front-end JavaScript software, is hosted outside the company walls. Since the SaaS data is off premise, it could be considered unprotected by the Fourth Amendment, which guards against unreasonable searches and seizures. As a result, law enforcement could potentially be required to obtain only a subpoena to seize a company's or individual's data residing in a SaaS vendor's servers, Stamos said. To issue subpoenas, which command a person to appear before court or produce documents, there are fewer legal hurdles to overcome. A search warrant, by contrast, requires probable cause to get approved.

Here’s the abstract of the Alex Stamos, Andrew Becherer and Nathan Wilcox presentation:

Cloud computing is an unstoppable meme at the CIO level, and will dominate corporate IT planning for the next several years. Although they do offer the promise of cost savings for many organizations, the basic ideas behind abstracting out the corporate datacenter greatly complicates the tasks of securing and auditing these systems. While there has been excellent research into low-level hypervisor and virtualization bugs, there has been little public discussion of the “big picture” problems for cloud computing. These include virtualized network devices, browser same-origin issues, credential management and many interesting legal challenges.

Our goal with this talk will be to explore the different attack scenarios that exist in the cloud computing world and to provide a comparison between the security models of the leading cloud computing platforms. We will discuss how current attacks against applications and infrastructure are changed with cloud computing, as well as introduce the audience to new types of vulnerabilities that are unique to cloud computing. Attendees will learn how to analyze the threat posed to them by cloud computing platforms as either providers or consumers of software built on these new platforms. Our platforms for discussion include Salesforce.com, Google Apps, Microsoft Office Live, Google AppEngine, Microsoft Azure, Amazon EC2, and Sun.

The BlackHat archives report the availability of MOV video and MP3 audio files of the presentation, but the MP3 doesn’t appear to be available.

Thanks to Chris Hoff (@Beaker) for the heads-up on Alex’s presentation.

Abel Avram digs into a recent MSDN article by Jonathan Wiggs in an Advice for Securing Data in Windows Azure post of 1/15/2010 to InfoQ.com:

In a recent MSDN article entitled Crypto Services and Data Security in Windows Azure, Jonathan Wiggs provides advice on securing data stored and processed through Windows Azure. InfoQ explored the topic in more detail to understand some of the security ramifications which come with deploying an application to the cloud.

When working with Windows Azure, Wiggs advises the use of the basic cryptographic support offered by Cryptographic Service Providers (CSP):

“A consistent recommendation is to never create your own or use a proprietary encryption algorithm... the algorithms provided in the .NET CSPs are proven, tested and have many years of exposure to back them up.”

He also suggests using the RNGCryptoServiceProvider class for generating random numbers, because it ensures high entropy in the numbers generated, making them difficult to guess.
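A minimal sketch of that recommendation (the key and IV sizes are my assumption, chosen for AES-256):

    using System.Security.Cryptography;

    class KeyGeneration
    {
        static void Main()
        {
            // RNGCryptoServiceProvider draws on the OS entropy pool; unlike
            // System.Random, its output cannot be reproduced from a seed.
            byte[] key = new byte[32];  // 256-bit key, e.g. for AES-256
            byte[] iv = new byte[16];   // one AES block
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(key);
                rng.GetBytes(iv);
            }
            // key and iv can now feed a proven .NET CSP such as
            // AesCryptoServiceProvider instead of a home-grown algorithm.
        }
    }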

CSP offers support for encrypting data and signing messages, but all of that is done with the help of encryption keys which are basically strings. Properly storing and protecting these keys is paramount in ensuring adequate data security. Windows Azure does not keep data encrypted by default, while SQL Azure does not provide encryption yet, according to Wiggs.

However, the article does not mention how to protect the storage keys, leaving the issue open for the user to solve. To protect the cryptography keys, Wiggs proposes several solutions:

  • Replace the keys regularly
  • Make them available only to the people who need to have access to them
  • Diagram the flow of data to be aware of how data is consumed and by whom, so that you can evaluate the risks involved and determine how to deal with them …
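Putting the pieces together, here’s a hedged sketch of the kind of symmetric encryption Wiggs describes, using the proven AES implementation in the .NET CSPs rather than anything home-grown; the key and IV are assumed to come from RNGCryptoServiceProvider as above, and managing them per the bullet points remains the caller’s responsibility:

    using System.IO;
    using System.Security.Cryptography;

    static class SymmetricCrypto
    {
        // Encrypts plaintext with the framework-provided AES CSP; key and IV
        // come from RNGCryptoServiceProvider (see the earlier sketch), and
        // rotating and guarding the key is the caller's responsibility.
        public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
        {
            using (var aes = new AesCryptoServiceProvider { Key = key, IV = iv })
            using (var encryptor = aes.CreateEncryptor())
            using (var output = new MemoryStream())
            {
                using (var crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
                {
                    crypto.Write(plaintext, 0, plaintext.Length);
                }
                return output.ToArray();
            }
        }
    }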

Lori MacVittie suggests Following Google’s Lead on Security? Don’t Forget to Encrypt Cookies in this 1/15/2010 security analysis:

In the wake of Google’s revelation that its GMail service had been repeatedly attacked over the past year, the search engine goliath announced it would be moving to HTTPS (HTTP over SSL) by default for all GMail connections. For users, nothing much changes except that all communication with GMail will be encrypted in transit using industry-standard SSL, regardless of whether they ask for it by specifying HTTPS as a protocol or not. In the industry we generally refer to this as an HTTPS redirect, and it’s often implemented by automatically rewriting the URI using a load balancing / application delivery solution.

Widely regarded as a good idea, and I’m certainly not disagreeing with that opinion, SSL secures data exchanged between the client and the server by encrypting every request and response using a private/public key exchange. This is a Good Idea and the general advice that “you should do this too” is sound; protecting data in transit from prying eyes eliminates the possibility that someone with ill intent might “sniff” out data and steal a user’s e-mail messages. Given the number of small and medium businesses that rely upon GMail for business-related communication, and that some of that communication might be considered confidential or sensitive, this simple security mechanism is certainly one that has a high value with minimal risk and costs associated with implementation.
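For reference, here’s a hedged ASP.NET sketch of the two mechanisms Lori mentions: redirecting plain HTTP to HTTPS, and marking cookies Secure so they never travel over an unencrypted connection. In production the redirect usually happens at the load balancer or application delivery controller, and note that the Secure flag keeps cookies off plain HTTP rather than encrypting their contents:

    using System;
    using System.Web;

    // Sketch of an HTTPS-redirect module; real deployments often do this at
    // the load balancer / application delivery tier instead.
    public class RequireHttpsModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var context = ((HttpApplication)sender).Context;
                if (!context.Request.IsSecureConnection)
                {
                    // Rewrite the URI to HTTPS and redirect the client.
                    var url = new UriBuilder(context.Request.Url) { Scheme = "https", Port = 443 };
                    context.Response.Redirect(url.Uri.AbsoluteUri, true);
                }
            };

            app.EndRequest += (sender, e) =>
            {
                // Mark every outbound cookie Secure so browsers will not send
                // it over plain HTTP; this is the step that's easy to forget.
                var response = ((HttpApplication)sender).Context.Response;
                foreach (string name in response.Cookies.AllKeys)
                    response.Cookies[name].Secure = true;
            };
        }

        public void Dispose() { }
    }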

Dmitry Sotnikov recommends a Case-Study on Secure SaaS in his post of 1/15/2010:

Security and data protection are key concerns for any cloud solution. I truly believe that this is also one aspect that you cannot just improve over time. No matter how agile you are, security needs to be there by design.

Unfortunately most cloud vendors/SaaS-providers still don’t tell enough about the way they protect customer data – which we know is a bad idea.

From that perspective you might find this case study, which Microsoft has just posted, worth reading: Systems Manager Offers Security-Enhanced, Hosted Solutions with Programming Framework. The case study lists some of the technologies used in Quest OnDemand, Quest Software’s Systems Management as a Service product family.

There’s more to security than just encrypting internet traffic. The case study discusses how the latest technology, such as Windows Identity Foundation and Active Directory Federation Services 2.0, helped us make sure that customers are always in control of their data. That includes not just protecting data from those who should not have access to it (including Quest’s own engineers!) but also providing a convenient and secure way to delegate access to those who should.

I hope this helps you get a good overview of one of the approaches to cloud security. Read the case study here.

<Return to section navigation list> 

Cloud Computing Events

• Joe McKendrick wrote Second Annual SOA-Cloud QCamp Announced! on 1/17/2010:

ebizQ has announced its second annual Cloud QCamp, a virtual event scheduled on April 7[, 2010].

We will be bringing together leading industry experts and practitioners to explore the role of service-oriented architecture (SOA) and business process management (BPM) in supporting cloud-computing initiatives. The conference will help enterprises cut through the hype and focus on issues surrounding cloud computing, covering Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). This year's QCamp will also focus on the development of Private Clouds in Enterprises.

I will lead a session on "The Economics of Cloud Computing." The economics of cloud computing can look enormously attractive, especially when weighing the costs of storage or processing at a few cents per instance or gigabyte, versus the tens of thousands of dollars in up-front investments required for on-site solutions. Cloud providers can deliver economies of scale not available to individual enterprises. But over the long run, do these huge savings hold up or collapse for enterprises? What about the costs associated with integration, configuration, data deduplication, and monitoring? Also, do enterprises need to look beyond cost and consider other potential benefits of cloud computing, such as the ability to focus resources on the business, versus IT maintenance? What about costs related to loss of control and customization? Or potential loss of competitive advantage that may be inherent in on-site, customized systems? This session will examine the economic pros and cons of on-demand versus on-site computing, and where these approaches may or may not work.

Joe continues with descriptions of his two sessions and several other presentations by “industry experts and practitioners.”

• Reuven Cohen (@ruv) announced CloudCamp Haiti (Fundraiser) Jan 20, 2010 in this 1/14/2010 post:

About CloudCamp Haiti (virtual unconference):

CloudCamp Haiti is a virtual unconference held as a public webinar. CloudCamp-in-the-Cloud builds upon the popular CloudCamp format by providing a free and open place for the introduction and advancement of cloud computing. For this event, we are raising funds to donate to the aid effort in Haiti.

Using an online meeting format attendees can exchange ideas, knowledge and information in a creative and supporting environment, advancing the current state of cloud computing and related technologies.

Please help us spread the word: Twitter, Facebook, IM, tell your neighbours and friends. Hashtag #CloudCampHaiti or copy and paste this post on to your blog.

Registration: http://cloudcamp-haiti-2010.eventbrite.com/

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Randy Bias offers links to his top 10 posts in Cloudscaling on a Tear – 2009 in Review of 1/15/2010:

Here’s a list of our top ten blog posts in 2009 (in order of most read) if you want to go back and review.

  1. Amazon’s EC2 Generating 220M+ Annually
  2. VMware vs. Amazon … ROUND ONE … FIGHT!
  3. Why is Amazon’s SAS70 Audit Bogus?
  4. EngineYard uses Chef, a Puppet Alternative
  5. The “Open” Cloud is Coming
  6. VMware’s vCloud API Forces Cloud Standards
  7. Amazon Threatens VPS Market
  8. On Second Thought…How Big Is AWS Really?
  9. Infrastructure-as-a-Service Builder’s Guide v1.0
  10. Defining Infrastructure Clouds

Bill McColl covers “Three generations of tools: SQL, MapReduce, Cloudcel” in his 25 Years of Big Data: From SQL To The Cloud post of 1/14/2010:

Back in 1985, the world was pre-web, data volumes were small, and no one was grappling with information overload. Relational databases and the shiny new SQL query language were just about perfect for this era. At work, 100% of the data required by employees was internal business data, the data was highly structured, and was organized in simple tables. Users would pull data from the database when they realized they needed it.

Fast forward to 2010. Today, everyone is grappling constantly with information overload, both in their work and in their social life. Most data today is unstructured, and most of it is in files, streams or feeds, rather than in structured tables. Many of the data streams are realtime, and constantly changing. At work, most of the data required by employees is now external data, from the web, from analytics tools, and from monitoring systems of all kinds - all kinds of data about customers, partners, employees, competitors, marketing, advertising, pricing, infrastructure, and operations. Today what's needed is smart IT systems that can automatically analyze, filter and push exactly the right data to users in realtime, just when they need it. Oh, and since no one wants to own data processing hardware and software any more, those IT systems should be in the cloud.

<Return to section navigation list> 
