Friday, November 30, 2007

Gartner Data Center Conference -- Polls

Here are some more Gartner Data Center Conference polls that I found interesting:

Question: When are you starting a CMDB?
  • Now: 41%
  • 6 months: 7%
  • 6-12 months: 18%
  • end of 2009: 21%
  • not planning: 13%

Question: Who is your CMDB Vendor?
  • BMC: 22%
  • CA: 9%
  • IBM: 7%
  • HP: 21%
  • Managed Objects: 1%
  • Home grown: 14%
  • Service desk: 15%
  • Other (NI2, Caimit): 1%

Question: What are your top Network Operations Pressures?
  • VoIP: 8%
  • MPLS: 8%
  • Wireless: 12%
  • Compliance: 3%
  • Security: 3%
  • Network Faults: 15%
  • Proactive Prevention and Performance: 27%
  • Meeting SLAs: 12%

Question: What is your #1 priority for 2008?
  • Network Device configuration: 8%
  • Network Operations Management: 32%
  • Performance Reporting: 20%
  • Traffic Analysis: 11%
  • Capacity Planning/emulation: 16%
  • Other: 3%
  • No Plans: 10%

Wednesday, November 28, 2007

TheRegister Article on Google's Iowa Data Center

I just had to do a quick post so you could enjoy the title of The Register's article about Google in Iowa: "Virgin Mary Appears in Google's Iowa Data Center"

The Register always has a unique (yeah, that's it) take on stories and a humorous twist.

Check it out here

Gartner - BCP Session

Yesterday I attended an informative session on Business Continuity Planning. I loved their definition of Business Continuity Management -- because it encompasses SO much that people tend to forget about. Their organizational groupings under BCP were:
  1. Business Recovery
  2. Contingency Planning
  3. Business Operations
  4. Information Security Management
  5. Pandemic Planning
  6. Crisis Management (very important one with lots of sub components)
  7. Damage Assessment
  8. IT Disaster Recovery
Here are a couple of polls that were taken of the attendees in the session.

Question: Do you have a Business Continuity Management Office?
  1. Yes: 55%
  2. No: 44%
  3. Don't Know: 1%

If you have a BCM office, where does it fit within the organization?
  1. CFO (11%)
  2. CIO (26%)
  3. CISO (10%)
  4. COO (17%)
  5. CRO (Chief Risk Officer): 17%
  6. Don't Know (6%)
  7. Other (13%)
BTW: Having it in the COO office is the ideal according to Gartner. You have heard me tout ITIL before, and it looks like I need to catch up on ITIL v3. Version 3 includes continuity management guidelines.

Paradigm Shift

Without sounding too much like an analyst and over-generalizing the tech industry as a whole, I really believe we are in the middle of a paradigm shift for technology. I’m old enough to remember mainframes, but never operated or administered them (unless doing COBOL programming in college counts). So there was the mainframe era, the client/server era, and the whatever-we’re-in-now era. I’ve loved the concept of utility computing ever since the hype (and over-hyping) began. I think it has a ton of potential and some really important concepts and intelligent people behind it. As many others have pointed out, the internet companies have contributed a significant amount to changing the architectures used in IT. Perhaps they can be credited for driving much of the needed change that allowed for such enormous scalability (there, the words paradigm shift AND scalability should sit well with the search engine spiders :) A comment from a Gartner session yesterday summed it up nicely: “cloud computing has ‘some’ degree of truth to it, but also a lot of fog.”

I’ll keep this post as short as possible so I don’t blab on too long and lose readers (assuming you have made it this long). I wanted to link to some utility computing and virtualization articles I liked and then make a few links out to some thoughts on data center containers/black box (DataCenter in a box part IIIa).

Bert Armijo and Peter Nickolov from 3Tera recently wrote an article on Fishtrain about the services that virtualization needs in order to adapt to the utility computing model. It is a very good article about future concepts and why virtualization is “not a complete utility computing solution”.

The additional service that I would add is security. I’ve been a big fan of Christopher Hoff’s blog, which frequently discusses virtualization security and potential vulnerability attack angles. And speaking of innovative technologies and industry shifts, check out his excellent post on Security and Disruptive Innovation part III. Security needs to be improved in virtualization, but even more so as it spans across a utility computing implementation.

Network World also ran an interesting article on virtualization security, the realization many are coming to in their implementations, and how some have not even started implementing because of security issues.

Because I am in the data center business I always digress to the physical part of the infrastructure when the ‘virtual’ data center is mentioned. To me there is no such thing as a virtual data center, because it is the one true ‘real’, tangible asset in the infrastructure equation. So when I read about Amazon EC2 and 3Tera, I love the utility computing concepts and having infrastructure virtualized across physical data centers. Of course, with my recent white paper on site selection I also automatically assume geographically dispersed data center locations to account for BCP plans and risk avoidance.

A final paradigm shift item I’ll mention is workload lifecycle and management. I don’t know if I completely understand it yet, but I have spent a fair amount of time on the Platespin web site and feel they have a very complete set of products. As it relates to a new and better way to deploy, manage and control your infrastructure, I would recommend anyone give their products consideration. There is also a decent joint presentation from Dell, Microsoft and Platespin on their respective technologies here

Ok, so there is the paradigm shift in infrastructure architecture and deployment options. Let’s go up a level and look at the data center as a whole. If you’ve read my blog for any amount of time you know I am intrigued, interested, and perplexed by the container model that Rackable, Sun, APC and others have come out with, and that Google patented but dropped as a research project.

There are some interesting comments on the Slashdot post about Intel Data Centers. Some of the interesting points I noticed from these comments are:

1. Chuck Thacker from Microsoft has a very interesting PowerPoint presentation on data centers as a container model. It is a 26-slide presentation full of their research and insight into the topic.

2. There are references to the recent news about Sun’s BlackBox being used underground in Japan and using Geo-exchange for cooling and heat exchange.

3. A user comment: "The reason a 'data center in a box' sounds so attractive is that the amortization schedules are different for IT equipment and buildings. If building infrastructure can last its advertised 25-30 year life, then a tilt-up or factory-assembled type of building structure is more cost-effective than containerized data centers architecturally."

The thing I have always wondered about, and that was brought up many times in the Slashdot comments, is just what in the world the practical application of the data center container is. With Google, Sun, Microsoft and others seriously looking at it and doing such deep research on the possibilities, you simply have to think that they have found something that makes business sense and that they have justified.

More later --- back to the Gartner conference for now…..

Google - Renewable Energy Push

While I don't think this is necessarily anything new.... Google announced a new program yesterday called Renewable Energy Cheaper Than Coal. The goal is to produce one gigawatt of renewable energy capacity that is cheaper than coal. The hope is to do it in years instead of decades.

Check out the Reuters article here

Tuesday, November 27, 2007

Gartner Conference - Polls

Just a quick post to give some background on the Gartner Data Center Conference that I am currently attending. During the opening comments and first keynote they took a few polls of the audience.

I think these are important --- to profile the average attendee and show real data about the industry. Here is what was covered so far:

Poll Question: What is the make up of your Data Center?
40% Mainframe, Linux, Unix, Windows
25% Unix, Linux, Windows
10% Mainframe, Unix, Windows
10% Unix, Windows
(I couldn't write fast enough to get the rest :) )

Poll Question: Do you have server consolidation projects?
1. No Plans -- 3%
2. Looking into it -- 17%
3. Project Under way -- 50%
4. Already completed a project, may do another -- 30%

Poll Question: How long have you worked in IT?
< 2 yrs: 1%
2-5yrs: 2%
5-10yrs: 8%
11-20yrs: 35%
21-30yrs: 40%
31-40yrs: 13%
40+yrs: 1%

Poll Question: Do you have a long term strategy for Infrastructure and Operations?
Yes: 42%
No: 37%
Unsure: 21%

Sunday, November 25, 2007

HSBC Questioning $1 billion Niagara County Data Center

The Niagara Gazette reports that global banking giant HSBC is reconsidering the $1 billion data center it had planned for Cambria, NY. The data center would add 56 jobs averaging $76k, and would add $14.5 million in property tax collections.

HSBC is not terminating the project, but is just looking at alternate sites in the county (otherwise it will give up the $89.5 million in tax breaks). The bank indicated that the current business climate was the reason for it to step back and stay in the planning phase for a while.

In January of this year Data Center Knowledge reported the plans for the 275,000 sq. ft. facility.

Check out the Niagara Gazette article here

Saturday, November 24, 2007

BroadGroup - 1mil sq ft of Fresh Capacity

BroadGroup Consultancy (London) announced build plans that include more than 1 million square feet of data center capacity. Most of the planned capacity, it was stated, will serve one or more large companies. Other projects for the space include managed services by ISPs, colocation and disaster recovery planning.

A little over a month ago BroadGroup predicted continuing demand for European data centers.

BroadGroup is an independent consultancy based in London that focuses on analyzing and interpreting business strategy. BroadGroup also runs the data centre portal

Check out the WHIR article here

Thursday, November 22, 2007

Happy Thanksgiving

Happy Thanksgiving!

Just wanted to wish everyone well this Thanksgiving. I'm heading to the Gartner Data Center Conference on Monday and will, of course, report all that I can as time allows

Also--check out Network World's Top IT Turkeys of 2007. :)

Monday, November 12, 2007

Parallel Computing and Cognitive Fitness

I just finished reading the article "Cognitive Fitness" in the November 2007 issue of Harvard Business Review. The article covers new research in neuroscience about staying sharp and exercising your brain. One of the items you often find in these articles and research is to learn something completely new and/or something you don't normally deal with.

After reading this article I went back to surfing the web.....and came across some information on parallel computing. Perfect! This is something that I've always considered out of my realm of comprehension, yet I have always been very interested in it. As an added bonus, there are some insights to draw for the data center industry and links to information about Google!

First -- Microsoft. Microsoft has made a couple of moves this year that indicate a trend in parallel computing, and perhaps a tie-in to container-style data centers....stay with me, I'll get to that. In July of this year Burton Smith was interviewed about programming languages and parallel computing. Smith oversees research in programming languages for parallel hardware architectures.
Multicore processors are driving a historic shift to a new parallel architecture for mainstream computers. But a parallel programming model to serve those machines will not emerge for five to 10 years, according to experts from Microsoft Corp.
Then, last Friday, Microsoft Research hired veteran supercomputer researcher Dan Reed. Reed's mission is to take a "green field approach" to the spiraling power and reliability requirements of large data centers.
"There is a sea change in computing coming at the intersection of multicore and large data centers, and working on this is one of the most exciting things I can imagine doing," said Reed. There's no single path to the parallel programming models needed to support tomorrow's multi core processors, said Reed. "It will take a variety of efforts in areas such as functional languages, transactional memory, extensions of existing languages and new higher level tool kits," he said.
And as we all know now, Microsoft is building 'mega data centers' to support the massive computing infrastructure required for internet-scale initiatives and Microsoft Research. There was also an interesting article about a month ago on the Cloud Computing initiative from Google and IBM. Google's Christophe Bisciglia explained:
"It's no longer enough to program one machine well; to tackle tomorrow's challenges, students need to be able to program thousands of machines to manage massive amounts of data in the blink of an eye."
The Google/IBM initiative is to advance large-scale distributed computing by providing hardware, software and services to universities. Google is providing several hundred of its custom-built computers, while IBM will provide its BladeCenter and System x servers. The initiative (and article) are pretty interesting; check it out here

The other Google link I found returned me to the 'brain workout' I was receiving by reading about parallel computing. Check out the slides and presentations on Distributed Systems and Cluster Computing at Google -- here

Ok, so now the container angle. Caveat---remember I'm a newbie on the parallel computing topic, so tell me if I'm way off here. Google's infrastructure is, more or less, clusters of networked CPUs orchestrated for various tasks and applications. Microsoft hints at similar items when talking about writing programming languages for parallel computing. Why not call a fully-loaded container of computers a cluster, write 'internet-scale' applications that can reference multiple clusters (i.e., thousands of CPUs across different containers), and still utilize the multi-core processors within each computer? Ok -- I know, this is essentially what Google does now across their data centers -- but why not change up the data center model? Place compute units (in this case, a full container) at geographically dispersed locations and then reference them with a parallel-optimized OS or language. The benefit of shipping containers in this case is not having the high up-front costs involved with building the mega data centers of today, and you can move the containers to wherever the cheap power and land are without too much trouble. With such dense computing and more raw power than form factor, the container is a good fit, and the portability and quick build time are added benefits. I still advocate that security is an absolute must-have add-on for shipping container parks, but retro-fit a warehouse and BYOUG (bring your own UPS and generator).
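To make the "containers as clusters" idea concrete, here is a toy Python sketch. This is purely my own illustration -- the `Container` class and `run_across_containers` function are made-up names, not any real Google or vendor API -- but it shows the two levels of fan-out I'm describing: an application splits work across containers, and each container then spreads its share across the cores inside it.

```python
# Toy model: each shipping container is a 'cluster' (a site plus a worker
# pool); an internet-scale app fans work out across containers, then
# across the cores inside each one. All names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

class Container:
    """One container 'cluster': a location plus a pool of workers."""
    def __init__(self, site, cores):
        self.site = site
        self.pool = ThreadPoolExecutor(max_workers=cores)

    def map(self, fn, chunk):
        # Second level of parallelism: spread the chunk across local cores.
        return list(self.pool.map(fn, chunk))

def run_across_containers(fn, data, containers):
    """First level: split the workload across dispersed containers."""
    data = list(data)
    chunks = [data[i::len(containers)] for i in range(len(containers))]
    results = []
    for container, chunk in zip(containers, chunks):
        results.extend(container.map(fn, chunk))
    return sorted(results)

containers = [Container("Iowa", 4), Container("Oregon", 4)]
squares = run_across_containers(lambda x: x * x, range(8), containers)
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

A real parallel-optimized language or OS would handle the scheduling, data movement and failure handling that this sketch waves away, but the shape of the problem -- two nested levels of distribution -- is the same.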

Ok -- that's enough brain exercising for one night (maybe a couple).

Sunday, November 11, 2007

Intel IT (and shipping containers, Part III)

About a year ago I listened to HP discuss their consolidation plans of reducing their data centers to a few key hubs. Recently, Intel has published some of the details surrounding their consolidation plans. Brently Davis has a nice YouTube video explaining the details.

Intel also launched their new power-efficient Penryn processors today.

A little while back I received my Winter 2008 issue of Premier IT -- Intel's magazine for sharing best practices. It is a pretty nice magazine -- usually vendor magazines are 80%+ pure marketing vehicle, but Intel's is actually quite informative. The "Transforming Intel IT" article in this issue was particularly interesting. I continue to be hung up on the exact use of the shipping container model for data centers. I still picture trailer parks full of black boxes with fiber hooked up as if they are getting HBO. :)

I have a number of items (and links) queued up for a longer post on shipping containers, the Google patent of the modular data center, and potential (practical) uses of the container model, but for now, I wanted to point out the interesting quotes from this Intel magazine article.

The article explains that Intel is evaluating all types of innovation......

We’ve determined that our compute servers operate quite well at a higher ambient temperature than do other systems such as storage; by comparison, the storage environment requires much cooler temperatures (10 percent to 20 percent lower) and more floor space per unit. By segmenting storage systems into smaller rooms that are tuned to the specific needs of storage, we could run the compute servers at higher temperatures, around 80 degrees Fahrenheit.
The second item is about containers:

The cost of building a new data center is extremely high—between USD 40 million and USD 60 million. As an alternative, we are considering placing high-density servers on racks in a container similar to those you see on container ships and trucks. We estimate that the same server capacity in this container solution will reduce facility costs by 30 percent to 50 percent versus a brick-and-mortar installation. Because it’s a small, contained environment, cooling costs are far less than for traditional data centers. Even if we build a warehouse-like structure to house the containers (thus addressing security and environmental concerns), the cost is dramatically less per square foot. In fact, the difference is so great that with this solution, brick-and-mortar data centers may become a thing of the past.
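Taking Intel's figures at face value, the implied savings range is easy to sketch out. The build costs and percentages below are the article's numbers; the arithmetic combining them is mine.

```python
# Rough arithmetic on Intel's quoted figures: a USD 40M-60M brick-and-mortar
# build versus a container solution said to cut facility costs 30%-50%.
build_low, build_high = 40e6, 60e6   # USD, per the Premier IT article
cut_low, cut_high = 0.30, 0.50       # claimed facility-cost reduction

worst_case = build_low * cut_low     # least saved: 30% of $40M
best_case = build_high * cut_high    # most saved: 50% of $60M
print(f"Implied savings: ${worst_case/1e6:.0f}M to ${best_case/1e6:.0f}M")
```

So even the pessimistic end of their range is eight figures of avoided facility cost per data center, which goes some way to explaining the "brick-and-mortar data centers may become a thing of the past" line.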
The site requires (free) registration, but once logged in, the article can be found here

Finally -- a presentation on their site about the energy efficiency opportunity had a cool slide on delivering data center optimization:

Before: 3.7 TFlops, 25 racks, 512 servers, 1,000 sq. ft., 128 kW

After: 3.7 TFlops, 1 rack, 53 blades, 40 sq. ft., 21 kW
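Running the slide's numbers (the figures are Intel's; the ratios are my quick math), the same 3.7 TFlops shrinks dramatically on both footprint and power:

```python
# Intel's slide: two configurations delivering the same 3.7 TFlops.
old = {"tflops": 3.7, "racks": 25, "servers": 512, "sqft": 1000, "kw": 128}
new = {"tflops": 3.7, "racks": 1, "blades": 53, "sqft": 40, "kw": 21}

space_reduction = old["sqft"] / new["sqft"]  # 25x less floor space
power_reduction = old["kw"] / new["kw"]      # roughly 6x less power
print(f"{space_reduction:.0f}x less space, {power_reduction:.1f}x less power")
```

That kind of density is exactly what makes the container math interesting: if one rack can do what 25 used to, the building around the racks starts to look like the expensive part.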

Thursday, November 08, 2007

Gartner Data Center Conference

Just a quick post to say that I will be attending the 26th annual Gartner Data Center Conference, being held in Las Vegas, Nov. 27-30. I have my schedule ready to go and am really excited about attending this conference.

If anyone else will be attending and would like to meet up -- drop me an email.


For whatever reason, I've been surfing a lot lately on IBM. IBM is an absolutely enormous company and has its hands in just about everything. Here is one of the things I have been looking at lately:

Today, System x is the second largest server group in IBM (based on revenue), next to System z, and by 2011 IBM expects it to be the largest server group.

Sunday, November 04, 2007

Symantec State of the Data Center Report 2007

Last Tuesday Symantec (SYMC) announced the release of their 2007 State of the Data Center Report. The international study surveyed managers of Global 2000 and other large companies. The magazines, web sites and company white papers are constantly full of industry statistics and trend monitoring, but I think this report did a nice job of doing the legwork necessary to get real data from those facing the issues in the data center today and presenting it in a clear and concise manner. I think one sentence in the paper summarizes the main point nicely:

Essentially, data center managers are being asked to deliver more high-quality services in an increasingly complicated environment, yet their budgets are relatively flat. As a result, data center managers find they adopt cost containment strategies that make use of new technologies, including virtualization, and new management approaches, such as those that automate routine processes.

Here are some of the highlights that I gleaned from reading the report:

  • Of the five factors impacting today's data centers, I think #2 and #5 are the big ones (in my mind). #2 is staffing and #5 is Disaster Recovery/Business Continuity Planning. Staffing has been noted several times in the press and is obviously becoming a large issue that managers must deal with.
  • Better preparedness for a disaster now versus two years ago was reported by 53% of the respondents. When thinking of locations for your DR and BCP plans, don't forget my Site Selection white paper.
  • Not surprisingly, the always fun statistic proved true once again as a cause of downtime. Twenty-eight percent of respondents listed "change or human error" as a chief reason for downtime. Although some stories have downplayed ITIL, I think for this reason alone you will see increased usage of the ITIL guidelines in data centers. This obviously plays into the staffing issues raised as well.
  • The report has good information on virtualization plans. I think it will be interesting to see how Microsoft fits into this market in the near future. I don't believe they will pose a serious threat to VMware, but they will most likely balance out the market a little more and take a decent percentage. VMware was the top product listed in the U.S., but almost half of Asia-Pacific respondents are using Microsoft virtualization, with only 35% going to VMware. I haven't finished watching it yet, but here is a video of Eric Traut from Microsoft presenting on Microsoft's virtualization technologies (it also mentions Windows 7).
  • The outsourcing statistics were interesting. Forty-two percent of U.S. managers said they utilize outsourcing, while 61% of non-U.S. organizations do. "Among the most common tasks outsourced by both U.S. and non-U.S. organizations are server maintenance, backups, storage management, archiving, and business continuity."
This is, overall, a very good report and worth the read. Check out the press release here

Friday, November 02, 2007

Chicago Colo Armed Robbery

Many years ago I remember CI Host as being a reputable, large hosting company in the industry. It seems as though they have gone downhill in recent years, and this recent event makes the bad even worse.

They have had at least four intrusions into their data centers in the last two years!! In the most recent event, intruders apparently cut through the walls with a power saw. At least 20 servers were stolen and a night manager was tasered. To make matters worse, CI Host staff were not quick to alert customers or even admit the breach.

I predict a mass exodus from this facility (old DC pun intended)

Check out the article at The Register here