Friday, October 24, 2008

Traditional disaster recovery test models outgrow usefulness

Most CIOs at enterprise-level companies are in on the dirty little secret of disaster recovery (DR) testing: The traditional DR test method is outgrowing its usefulness. The complexity of today's environments makes true simulation of recovery from a disaster quite difficult.

CIOs aren't abandoning the method -- there are as yet few alternatives -- but analysts say they would be wise to incrementally increase the scope of testing and look to tools to monitor software configuration changes to increase effectiveness.
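
The article stops at the recommendation, but the basic idea behind monitoring software configuration changes between production and a recovery site can be sketched in a few lines. Everything below (the file paths, the host setup, the comparison logic) is a hypothetical illustration, not a tool the article names:

# Rough sketch of configuration-drift checking between a production host
# and its disaster-recovery counterpart. File paths are hypothetical; a real
# deployment would use a dedicated configuration-management or drift tool.
import hashlib
from pathlib import Path

WATCHED_FILES = [
    "/etc/my.cnf",
    "/etc/httpd/conf/httpd.conf",
]

def fingerprint(paths):
    """Return a dict of path -> SHA-256 digest for the files that exist."""
    digests = {}
    for p in paths:
        f = Path(p)
        if f.is_file():
            digests[p] = hashlib.sha256(f.read_bytes()).hexdigest()
    return digests

def report_drift(prod, dr):
    """Print any config file whose contents differ between the two sites."""
    for path in sorted(set(prod) | set(dr)):
        if prod.get(path) != dr.get(path):
            print(f"DRIFT: {path}")

# In practice the two fingerprints would be collected on separate hosts
# (e.g. by an agent or over ssh) and compared centrally before a DR test.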

SearchCIO article

Wednesday, October 22, 2008

Gotcha! How virtualization savings can vanish

As a consultant at Accenture, Jay Corn has seen IT organizations plan a virtualization deployment to drive down server operating costs - and then realize that they would achieve zero cost savings.

Here's the problem: If you host your servers at a colocation facility, chances are the service provider - not you - will get all of the benefits. The problem lies in the pricing models. "They still look at it as one server image and they don't care if it's virtual or physical - they charge the same for it. That's definitely a big gotcha," he says.
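
A back-of-the-envelope calculation shows why per-image pricing erases the savings. All of the rates and server counts below are made-up numbers, purely for illustration:

# Illustrative only: hypothetical colocation rates showing how per-image
# pricing can erase virtualization savings for the customer.
monthly_rate_per_image = 500          # provider charges per server image
physical_servers = 20                 # before virtualization
vm_images = 20                        # same workloads, now running as VMs
hosts_after_consolidation = 4         # physical hosts actually needed

cost_before = physical_servers * monthly_rate_per_image
cost_per_image_pricing = vm_images * monthly_rate_per_image
cost_per_host_pricing = hosts_after_consolidation * monthly_rate_per_image

print(f"Before virtualization:    ${cost_before}/month")
print(f"After, billed per image:  ${cost_per_image_pricing}/month  (no savings)")
print(f"After, billed per host:   ${cost_per_host_pricing}/month")

Billed per image, the customer pays exactly what they did before; the consolidation benefit stays with the provider.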

ComputerWorld article

Tuesday, October 21, 2008

Google Apps Outages Officially a Part of Our Lives

Google's Gmail suffers an outage, while the search engine's Start Page suffers a bug, disconnecting users from their content. The blips cast another pall over SaaS, cloud computing and Web services at large. We might be able to depend on SaaS, but we must take additional measures to make sure all of the data we transact via desktops and computers is made redundant.

Update: There is a harried Google Apps adviser named Mark whose life I don't envy. Once, sometimes twice a month it seems, he gets to try to soothe angry users of Google Apps, the search engine's Web-based applications that enable collaboration via e-mail, word processing and spreadsheet documents.

eWeek article

Monday, October 20, 2008

Capacity planning and the cloud

One of the problems that cloud computing is trying to solve is capacity planning for companies and the services they offer. Current datacenters for individual companies and, where relevant, for entire websites are designed to cope with a particular peak load.

The problem with this model is that a large number of machines may sit relatively idle while waiting for the traffic spike that would put them to use. Meanwhile, these machines are sucking power, wasting management cycles, and steadily burning through their lifespan waiting to be used. Altogether, it's a waste of time and resources on any number of levels.
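
A toy calculation makes the idle capacity concrete. The hourly traffic figures and the per-server capacity below are invented for illustration:

# Toy illustration of peak-load provisioning: invented hourly request counts
# for one day, with the server fleet sized for the single busiest hour.
hourly_requests = [200, 150, 120, 100, 110, 180, 400, 900,
                   1500, 1800, 2000, 2100, 1900, 1700, 1600, 1500,
                   1400, 1300, 1200, 900, 700, 500, 350, 250]
requests_per_server_per_hour = 100    # hypothetical capacity of one server

peak = max(hourly_requests)
servers_provisioned = -(-peak // requests_per_server_per_hour)  # ceil: sized for peak

capacity = servers_provisioned * requests_per_server_per_hour * len(hourly_requests)
demand = sum(hourly_requests)
print(f"Servers provisioned for peak: {servers_provisioned}")
print(f"Average utilization: {demand / capacity:.0%}")  # the rest is idle power and cooling

With these numbers the fleet runs at under half its capacity on average, which is exactly the waste the article describes.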

ComputerWorld article

Tuesday, October 7, 2008

Thinking outside the case: running naked servers

When it comes to data center metrics, the one most often talked about is square footage. Nobody ever announces that they've built a facility with Y tons of cooling, or Z megawatts. The first metric quoted is X square feet. Talk to any data center manager, however, and they'll tell you that floor space is completely irrelevant these days. It only matters to the real estate people. All that matters to the rest of us is power and cooling - watts per square foot. How much space you have available is nowhere near as important as what you can actually do with it.
If you look at your data center with a fresh eye, where is the waste really happening?
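
As a rough worked example of the metric (all numbers hypothetical), it is the power density rather than the floor area that decides how much compute a room can actually hold:

# Hypothetical numbers: power density, not area, sets the ceiling on IT load.
facility_sq_ft = 10_000
total_it_load_watts = 1_500_000       # 1.5 MW of critical load

watts_per_sq_ft = total_it_load_watts / facility_sq_ft
print(f"Power density: {watts_per_sq_ft:.0f} W/sq ft")

# The same 10,000 sq ft supports very different amounts of compute depending
# on how much power and cooling can be delivered to it.
for density in (50, 150, 300):        # W/sq ft the facility can actually cool
    print(f"At {density} W/sq ft the room supports "
          f"{density * facility_sq_ft / 1000:.0f} kW of IT load")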

Serverspecs article

Thursday, October 2, 2008

Cloud computing is stupidity says GNU guru Richard Stallman

Mark Fricker at RPM Technologies brought this article to our attention. Great insight by Richard Stallman.

I just ran across an article in the Guardian (UK) in which GNU creator (and founder of the Free Software Foundation) Richard Stallman minces no words about the cloud computing phenomenon, calling it a trap. TechRepublic bloggers have written skeptically about the concept, especially where it concerns privacy and security issues, and others have reported on particular cloud initiatives such as those of Google and Amazon.

Stallman's comments to the Guardian go beyond merely skeptical, however: "It's stupidity. It's worse than stupidity: it's a marketing hype campaign."

Tech Republic article

Tuesday, September 30, 2008

Is downtime more frequent, or more visible?

Are leading Internet sites reliable enough? The New York Times examines web downtime today in a front-page story, which focuses on users' growing reliance upon web services. "Now the Web is an irreplaceable part of daily life, and Internet companies have plans to make us even more dependent on it," writes Brad Stone. "The problem is that this ideal requires Web services to be available around the clock - and even the Internet's biggest companies sometimes have trouble making that happen."

Data Center Knowledge article