Sunday, December 30, 2007

2008 Predictions

As promised, here are my 2008 predictions. My prediction from last year (fiber to the home) didn't come true at all -- as I write this, I'm visiting my in-laws, who can't even get a cable modem or DSL in their new neighborhood (yet).

Ok...here goes:

1. As I have alluded to in the past, I am a big fan of utility computing. It receives a fair amount of hype, but I can see its true potential and why people will start to see the light in the coming years. The events that 365 Main and Rackspace encountered in 2007 were a wake-up call for web sites with only a single hosting presence to take a more serious look at utility computing. Amazon EC2, S3, 3Tera and Joyent are pretty cool and worth a look for almost anyone. I think the tools to measure, monitor and deploy utility computing will definitely grow in the enterprise realm, now that the internet sites/companies have pioneered so much. The growth of utility computing and enterprise interest should also help along the WAN acceleration and optimization industries as well. BTW -- there was an interesting response to Intel's question about emerging trends in compute models -- check it out here.

2. 2008 will see the hype phrase "Software as a Service" finally fizzle. There's no rocket science here people - get over the warm-fuzzy terminology.

3. Green/new energy sources will continue to be big news in 2008 (big stretch here, huh?). Only two things I'll offer here.... The first is to take a look at the list of greentech companies that Kleiner Perkins (the venture capital firm) is funding --- here. I listened to Tom Perkins on the Venture Voice podcast recently -- what an interesting and amazing person! The second item is wind; Kansas took recent headlines, but wind will continue to grow as a new energy source in 2008. There was an interesting story in Sunday's Des Moines Register about the wind industry in Iowa -- some interesting stats and stories on how the industry is growing here.

4. With the data center industry booming, I think 2008 will see more data center related services come out, e.g., this article

5. Companies to watch / will be acquired? Akamai (an established company, but they will be doing exciting things), Riverbed, Savvis, Platespin, 3tera, and Joyent

6. Political ads will continue to be skipped on my TiVo. Living in Iowa means the privilege of having the first caucus, but the trade-off is enduring 24x7 ads on TV from the candidates. Maybe after the caucuses are over the ads will take a break until next fall.

7. Death to the RIAA in 2008 -- Alleluia!!

8. Here is my shot-in-the-dark prediction -- at least one large acquisition will take place in 2008 in the colocation or CDN industry. There has been a lot of activity here and some quality players; I think the market has reached a point where something big will happen next year.

Tuesday, December 25, 2007

Merry Christmas!

Merry Christmas!


Dear Lord, I've been asked to thank Thee for the Christmas turkey before us, a turkey which was no doubt a lively, intelligent bird, a social being capable of actual affection. Anyway, it's dead and we're gonna eat it.
-- Berke Breathed
(as seen in Forbes Magazine)


I just wanted to wish everyone a Merry Christmas and prosperous 2008!!

I will be attempting some technology and data center predictions, just as soon as I am done reading what everyone else has predicted, so I can pick the good ones and make them my predictions as well. :)


Wednesday, December 19, 2007

Sun BlackBox in Australia

It seems like the Sun BlackBox has arrived in Australia.

Check out Tom Worthington's post/review of it here

Sunday, December 16, 2007

Chicago Data Center Market

As a follow-up to a comment I made in my last post -- Chicago is a RED hot market for data centers. ZDNet has an article "Bragging Rights: Chicago as green data center powerhouse".

It mostly talks about the new Microsoft data center, but lists the reasons why they chose Chicago. It seems like the hot area has shifted -- it was Texas, it's now Chicago, and who knows where's next! Hopefully Chicago will not become as over-crowded as Texas is now.

MLB and Orbitz Case Studies

Network World is probably in the top 3 for my favorite magazines and web sites to visit. They really do a good job of capturing the news, but also delivering some good interviews that demonstrate real world use of the technologies they cover. Two particular stories caught my attention recently that I enjoyed:

The first was a story on Major League Baseball Advanced Media -- I visit mlb.com and stlcardinals.com frequently. The article interviews Ryan Nelson, director of operations for MLB Advanced Media. MLB uses Joyent services to 'dial up' and 'dial down' their use of servers and compute power based on seasonal load and needs. I had heard of Joyent before, but never really looked into them. It's a pretty cool service, and seeing how MLB.com uses it helps solidify the concept. Yes, it's another company jumping on the "cloud computing" bandwagon, but they offer some pretty innovative solutions, coupled with some cool Sun hardware and technologies. I had a brief introduction to Sun virtualization technology at the Blackbox event earlier this year, but if you haven't checked out Solaris Zones, it is worth a look. MLB has data centers in New York and Chicago, and thanks to the infrastructure they have set up, they can move utilization between centers (for upgrades and such) on the fly. It sounds like Ryan Nelson has a pretty cool job playing around with this infrastructure and new technology. Check out the interview/article here
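For what it's worth, the 'dial-up/dial-down' idea can be sketched in a few lines. This is purely my illustration -- the function name, requests-per-server figure and headroom factor are all invented, and it has nothing to do with Joyent's actual service:

```python
import math

def servers_needed(requests_per_sec, reqs_per_server=500, headroom=1.25, floor=4):
    """Servers to run for the observed load, with spare headroom and a minimum floor."""
    return max(floor, math.ceil(requests_per_sec / reqs_per_server * headroom))

# The seasonal swing is the whole point: capacity follows the schedule.
print(servers_needed(1_000))    # quiet off-season day -> 4 (the floor)
print(servers_needed(60_000))   # postseason traffic spike -> 150
```

Run this weekly against observed traffic and the fleet shrinks in January and grows in October, which is exactly the appeal of paying per server-hour instead of owning peak capacity year-round.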

The second article (thanks for the link, Ben) from Network World is about travel web site Orbitz. Like everyone else in the industry, Orbitz is trying to go green -- or as CIO Bahman Koohestani put it, "taking his IT operations carbon neutral". One thing that I liked about Orbitz (in this article) is that they know how much energy they use, monitor that use on a daily basis, and know how much cooling they use for various parts of their operations. Going green is great, but keeping very close tabs on energy use and mining the data is even better (in my opinion). Orbitz has two large data centers in the Chicago area, and Koohestani touts it as an excellent place to locate data centers. Chicago is RED hot nowadays for data centers. Like other stories we have seen about company strategies -- Orbitz is slashing the number of servers used and consolidating data center operations worldwide. The article has all of the details about how Orbitz is greening its IT operations.

Thursday, December 13, 2007

Renewable Energy Capital of America

Continuing the push for Iowa as an ideal place to locate business, Governor Chet Culver announced that Iowa was the "renewable energy capital of America". Playing off the attention Iowa gets for political caucusing, Culver encouraged campaign staff and media to rethink what they knew about Iowa.

Culver went on to list some of the statistics to back up the renewable energy claim:
  • Iowa ranks #1 in ethanol production, #2 in biodiesel production and #3 in wind energy production
  • In 2006, Iowa alone accounted for over 30 percent of U.S. ethanol production and 25 percent of U.S. biodiesel production.
  • Iowa has over 1,000 wind turbines, which create almost 1,000 megawatts of capacity, generating enough power to serve 250,000 homes.
  • The $500 million Iowa Values Fund and the new $100 million renewable energy research and development initiative called the Iowa Power Fund offer businesses the tools and financial resources they need to be successful. Administered through our state Department of Economic Development, these funds provide tax credits, incentives, loans and regulatory assistance to foster business development and job creation.
Check out the complete press release here
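Out of curiosity, a quick back-of-the-envelope check on those turbine numbers (the raw figures come from the press release; the ratios are my own arithmetic):

```python
# Figures as stated in the press release:
turbines = 1_000
capacity_mw = 1_000
homes_served = 250_000

print(capacity_mw / turbines)              # ~1 MW of capacity per turbine
print(capacity_mw * 1_000 / homes_served)  # ~4 kW of capacity per home served
```

Roughly 1 MW per turbine and 4 kW per home -- plausible for 2007-era turbines, so the claims hang together.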

Tuesday, December 04, 2007

Mission Critical Magazine

As an information junkie I tend to visit a lot of web sites and read a lot of blogs. I also frequent a lot of tech industry magazines. A new magazine has come out that I will be adding to my list - Mission Critical Magazine. I have only had a short time to check out the site, but what I have seen so far is really good.

On their home page is a link to Chris Crosby's presentation on 5 Data Center myths. I attended this session at Gartner's Data Center Conference and really enjoyed it. At the conference there was a nice follow-up presentation of a case study that Digital Realty Trust implemented. Chris described their POD Architecture and Gating Process and how it was applied to the customer requirements.

Check out Mission Critical Magazine!

International Site Selection

When I did my site selection white paper back in October, I limited it to the United States. Adam Trujilo over at searchdatacenter.com has a nice write-up on some of the international studies done on site selection as well as references to recent data center activities abroad.

Check it out here

Monday, December 03, 2007

Security in Ten Years

Just a quick link to an interesting post where security guru Bruce Schneier has a conversation with Marcus Ranum.

The conversation will appear in Information Security Magazine this month. While there is nothing earth-shattering in it, it is a good read.

Check it out here
---------------------

Also check out the SANS Top 20 2007 Security Risks

Friday, November 30, 2007

Gartner Data Center Conference -- Polls

Here are some more Gartner Data Center Conference polls that I found interesting:

Question: When are you starting a CMDB?
Responses:
  • Now: 41%
  • 6 months: 7%
  • 6-12 months: 18%
  • end of 2009: 21%
  • not planning: 13%

Question: Who is your CMDB Vendor?
Response:
  • BMC: 22%
  • CA: 9%
  • IBM: 7%
  • HP: 21%
  • Managed Objects: 1%
  • Home grown: 14%
  • Service desk: 15%
  • Other (NI2, Caimit, Service-now.com): 1%

Question: What are your top Network Operations Pressures?
  • VoIP: 8%
  • MPLS: 8%
  • Wireless: 12%
  • Compliance: 3%
  • Security: 3%
  • Network Faults: 15%
  • Proactive Prevention and Performance: 27%
  • Meeting SLAs: 12%

Question: What is your #1 priority for 2008?
  • Network Device configuration: 8%
  • Network Operations Management: 32%
  • Performance Reporting: 20%
  • Traffic Analysis: 11%
  • Capacity Planning/emulation: 16%
  • Other: 3%
  • No Plans: 10%

Wednesday, November 28, 2007

TheRegister Article on Google's Iowa Data Center

I just had to do a quick post so you could enjoy the title of The Register's article about Google in Iowa. "Virgin Mary Appears in Google's Iowa Data Center"

The Register always has a......um.... unique (yeah, that's it) take on stories and a humorous twist.

Check it out here

Gartner - BCP Session

Yesterday I attended an informative session on Business Continuity Planning. I loved their definition of Business Continuity Management -- because it encompasses SO much that people tend to forget about. Their organizational groupings under BCM were:
  1. Business Recovery
  2. Contingency Planning
  3. Business Operations
  4. Information Security Management
  5. Pandemic Planning
  6. Crisis Management (very important one with lots of sub components)
  7. Damage Assessment
  8. IT Disaster Recovery
Here are a couple of polls that were taken of the attendees in the session.

Question: Do you have a Business Continuity Management Office?
  1. Yes: 55%
  2. No: 44%
  3. Don't Know: 1%

If you have a BCM office, where does it fit within the organization?
  1. CFO (11%)
  2. CIO (26%)
  3. CISO (10%)
  4. COO (17%)
  5. CRO (Chief Risk Officer): 17%
  6. Don't Know (6%)
  7. Other (13%)
BTW: Having it in the COO office is the ideal, according to Gartner.

Finally....you have heard me tout ITIL before and it looks like I need to catch up on ITIL v3. Version 3 includes continuity management guidelines.

Paradigm Shift

Without sounding too much like an analyst and over-generalizing the tech industry as a whole, I really believe we are in the middle of a paradigm shift for technology. I’m old enough to remember mainframes, but never operated or administered them (unless doing COBOL programming in college counts). So there was the mainframe era, the client/server era and the whatever-we’re-in-now era. I’ve loved the concept of utility computing ever since the hype (and over-hyping) began. I think it has a ton of potential and some really important concepts and intelligent people behind it. As many others have pointed out, the internet companies have contributed a significant amount to changing the architectures used in IT. Perhaps they can be credited for driving much of the needed change that allowed for such enormous scalability (there, the words paradigm shift AND scalability should sit well with the search engine spiders :). A comment from a Gartner session yesterday summed it up nicely: “cloud computing has ‘some’ degree of truth to it, but also a lot of fog”.

I’ll keep this post as short as possible so I don’t blab on too long and lose readers (assuming you have made it this long). I wanted to link to some utility computing and virtualization articles I liked and then make a few links out to some thoughts on data center containers/black box (DataCenter in a box part IIIa).

Bert Armijo and Peter Nickolov from 3Tera recently wrote an article on Fishtrain about the services that virtualization needs in order to adapt to the utility computing model. It is a very good article about future concepts and why virtualization is “not a complete utility computing solution”.

The additional service that I would add is security. I’ve been a big fan of Christopher Hoff’s blog, which frequently discusses virtualization security and potential attack angles. And speaking of innovative technologies and industry shifts, check out an excellent post on Security and Disruptive Innovation part III. Security needs to be improved in virtualization, but even more so as it spans a utility computing implementation.

Network World also ran an interesting article on virtualization security, the realization many are coming to about their implementations, and how some have not even started implementing because of security concerns.

Because I am in the data center business I always digress to the physical part of the infrastructure when the ‘virtual’ data center is mentioned. To me there is no such thing as a virtual data center, because it is the one true ‘real’, tangible asset in the infrastructure equation. So when I read about Amazon EC2 and 3Tera, I love the utility computing concepts and having infrastructure virtualized across physical data centers. Of course, with my recent white paper on site selection, I also automatically assume geographically dispersed data center locations to account for BCP plans and risk avoidance.

A final paradigm shift item I’ll mention is workload lifecycle and management. I don’t know if I completely understand it yet, but I have spent a fair amount of time on the Platespin web site and feel they have a very complete set of products. As it relates to a new and better way to deploy, manage and control your infrastructure, I would recommend anyone give their products consideration. There is also a decent joint presentation from Dell, Microsoft and Platespin on their respective technologies here

Ok, so there is the paradigm shift in infrastructure architecture and deployment options. Let’s go up a level and look at the data center as a whole. If you’ve read my blog for any amount of time you know I am intrigued, interested, and perplexed by the container model that Rackable, Sun, APC and others have come out with, and that Google patented but later dropped as a research project.

There are some interesting comments on the Slashdot post about Intel Data Centers. Some of the interesting points I noticed from these comments are:

1. Chuck Thacker from Microsoft has a very interesting PowerPoint presentation on data centers as a container model. It is a 26-slide presentation full of their research and insight into the topic.

2. There are references to the recent news about Sun’s BlackBox being used underground in Japan and using Geo-exchange for cooling and heat exchange.

3. A user comment: "The reason a 'data center in a box' sounds so attractive is that the amortization schedules are different for IT equipment and buildings. If building infrastructure can last its advertised 25-30 year life, then a tilt-up or factory-assembled building structure is more cost-effective than containerized data centers architecturally."

The thing I have always been thinking about, and that was brought up many times in the Slashdot comments, was just what in the world was the practical application of the data center container? With Google, Sun, Microsoft and others seriously looking at it and doing such deep research on the possibilities, you simply have to think that there is something that they have found that makes business sense and that they have justified.

More later --- back to the Gartner conference for now…..

Google - Renewable Energy Push

While I don't think this is necessarily anything new.... Google announced a new program yesterday called Renewable Energy Cheaper Than Coal. The goal is to produce one gigawatt of renewable energy capacity that is cheaper than coal. The hope is to do it in years instead of decades.

Check out the Reuters article here

Tuesday, November 27, 2007

Gartner Conference - Polls

Just a quick post to give some background on the Gartner Data Center Conference that I am currently attending. During the opening comments and first keynote they took a few polls of the audience.

I think these are important --- to profile the average attendee and show real data about the industry. Here is what was covered so far:

Poll Question: What is the make up of your Data Center?
Responses:
40% Mainframe, Linux, Unix, Windows
25% Unix, Linux, Windows
10% Mainframe, Unix, Windows
10% Unix, Windows
(I couldn't write fast enough to get the rest :) )

Poll Question: Do you have server consolidation projects?
Responses:
1. No Plans -- 3%
2. Looking into it -- 17%
3. Project Under way -- 50%
4. Already completed a project, may do another -- 30%

Poll Question: How long have you worked in IT?
Responses:
< 2 yrs: 1%
2-5yrs: 2%
5-10yrs: 8%
11-20yrs: 35%
21-30yrs: 40%
31-40yrs: 13%
40+yrs: 1%

Poll Question: Do you have a long term strategy for Infrastructure and Operations?
Responses:
Yes: 42%
No: 37%
Unsure: 21%

Sunday, November 25, 2007

HSBC Questioning $1 billion Niagara County Data Center

The Niagara Gazette reports that global banking giant HSBC is reconsidering the $1 billion data center it had planned for Cambria, NY. The data center would add 56 jobs, averaging $76k, and would add $14.5 million in property tax collections.

HSBC is not terminating the project, but is looking at alternate sites in the county (otherwise it will give up the $89.5 million in tax breaks). The bank indicated that the current business climate was the reason to step back and stay in the planning phase for a while.

In January of this year Data Center Knowledge reported the plans for the 275,000 sq. ft. facility.

Check out the Niagara Gazette article here

Saturday, November 24, 2007

BroadGroup - 1mil sq ft of Fresh Capacity

BroadGroup Consultancy (London) announced build plans that include more than 1 million square feet of data center capacity. Most of the planned capacity, it was stated, will serve one or more large companies. Other uses for the space include managed services by ISPs, colocation and disaster recovery planning.

A little over a month ago BroadGroup predicted the constant demand for European data centers.

BroadGroup is an independent consultancy based in London that focuses on analyzing and interpreting business strategy. BroadGroup also runs the data centre portal datacentres.com.

Check out the WHIR article here

Thursday, November 22, 2007

Happy Thanksgiving


Happy Thanksgiving!

Just wanted to wish everyone well this Thanksgiving. I'm heading to the Gartner Data Center Conference on Monday and will of course report all that I can in what time allows.

Also--check out Network World's Top IT Turkeys of 2007. :)

Monday, November 12, 2007

Parallel Computing and Cognitive Fitness

I just finished reading the article "Cognitive Fitness" in the November 2007 issue of Harvard Business Review. The article covers new research in neuroscience about staying sharp and exercising your brain. One of the items you often find in these articles and research is to learn something completely new and/or something you don't normally deal with.

After reading this article I went back to surfing the web.....and came across some information on parallel computing. Perfect! This is something that I've always considered out of my realm of comprehension, yet I have always been very interested in it. As an added bonus there are some insights to draw in the data center industry and links to information about Google!

First -- Microsoft. Microsoft has made a couple of moves this year to indicate a trend in parallel computing and perhaps a tie-in to container-style data centers....stay with me, I'll get to that. In July of this year EETimes.com interviewed Burton Smith about programming languages and parallel computing. Smith oversees research in programming languages for parallel hardware architectures.
Multicore processors are driving a historic shift to a new parallel architecture for mainstream computers. But a parallel programming model to serve those machines will not emerge for five to 10 years, according to experts from Microsoft Corp.
Then, last Friday, Microsoft Research hired veteran supercomputer researcher Dan Reed. Reed's mission is to take a "green field approach" to the spiraling power and reliability requirements of large data centers.
"There is a sea change in computing coming at the intersection of multicore and large data centers, and working on this is one of the most exciting things I can imagine doing," said Reed. There's no single path to the parallel programming models needed to support tomorrow's multicore processors, said Reed. "It will take a variety of efforts in areas such as functional languages, transactional memory, extensions of existing languages and new higher level tool kits," he said.
And as we all know now, Microsoft is building 'mega data centers' to support the massive computing infrastructure required for internet-scale initiatives and Microsoft Research.

EETimes.com also had an interesting article about a month ago on the Cloud Computing initiative for Google and IBM. Google's Christophe Bisciglia explained:
"It's no longer enough to program one machine well; to tackle tomorrow's challenges, students need to be able to program thousands of machines to manage massive amounts of data in the blink of an eye."
The Google/IBM initiative is to advance large-scale distributed computing by providing hardware, software and services to universities. Google is providing several hundred of its custom-built computers, while IBM will provide its BladeCenter and System x servers. The initiative (and article) are pretty interesting; check it out here

The other Google link I found returned me to the 'brain workout' I was receiving by reading about parallel computing. Check out the slides and presentations on Distributed Systems and Cluster Computing at Google -- here

Ok, so now the container angle. Caveat -- remember I'm a newbie on the parallel computing topic, so tell me if I'm way off here. Google's infrastructure is, more or less, clusters of networked CPUs orchestrated for various tasks and applications. Microsoft hints at similar things when talking about writing programming languages for parallel computing. Why not call a fully-loaded container of computers a cluster, write 'internet-scale' applications that can reference multiple clusters (i.e., thousands of CPUs across different containers), and still utilize the multi-core processors within each computer? Ok -- I know, this is essentially what Google does now across their data centers -- but why not change up the data center model? Place compute units (in this case, a full container) at geographically dispersed locations and then reference them with a parallel-optimized OS or language. The benefit of shipping containers in this case is avoiding the high up-front costs of building today's mega data centers, and the containers can be moved to wherever the cheap power and land are without too much trouble. With such dense computing and more raw power per form factor, the container is a good fit, and the portability and quick build time are added benefits. I still advocate that security is an absolute must-have add-on for shipping container parks, but retrofit a warehouse and BYOUG (bring your own UPS and generator).
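To make the idea above concrete, here is a toy sketch of treating each container as one addressable cluster and fanning work out across them. Every name and number here is invented -- this is my speculation, not anything Google or Microsoft actually does:

```python
# Toy model: each shipping container is one addressable cluster at a
# geographically dispersed site; shards of work are spread across them.
from concurrent.futures import ThreadPoolExecutor

CONTAINERS = {           # hypothetical sites with cheap power and land
    "iowa-1": 1_120,     # CPUs available in each container
    "oregon-2": 1_120,
    "texas-3": 560,
}

def run_on_container(site, shard):
    # Placeholder for dispatching a work shard to that container's cluster.
    return f"{site} processed shard {shard}"

def map_across_containers(shards):
    """Assign shards round-robin across containers and run them in parallel."""
    sites = list(CONTAINERS)
    with ThreadPoolExecutor() as pool:
        jobs = [pool.submit(run_on_container, sites[i % len(sites)], s)
                for i, s in enumerate(shards)]
        return [j.result() for j in jobs]

print(map_across_containers(range(4)))
```

The point of the sketch is only that the application addresses clusters (containers), not individual machines -- which is what would let you drop a new container into a cheap-power site and have it join the pool.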

Ok -- that's enough brain exercising for one night (maybe a couple).

Sunday, November 11, 2007

Intel IT (and shipping containers, Part III)

About a year ago I listened to HP discuss their consolidation plan to reduce their data centers to a few key hubs. Recently, Intel has published some of the details surrounding their own consolidation plans. Brently Davis has a nice YouTube video explaining the details.

Intel also launched their new power-efficient Penryn processors today.

A little while back I received my Winter 2008 issue of Premier IT -- Intel's magazine for sharing best practices. It is a pretty nice magazine -- usually vendor magazines are 80%+ pure marketing vehicle, but Intel's is actually quite informative. The "Transforming Intel IT" article in this issue was particularly interesting. I continue to be hung up on the exact use of the shipping container model for data centers. I still picture trailer parks full of black boxes and fiber hooked up as if they are getting HBO. :)

I have a number of items (and links) queued up for a longer post on shipping containers, the Google patent of the modular data center, and potential (practical) uses of the container model, but for now, I wanted to point out the interesting quotes from this Intel magazine article.

The article explains that Intel is evaluating all types of innovation......

We’ve determined that our compute servers operate quite well at a higher ambient temperature than do other systems such as storage; by comparison, the storage environment requires much cooler temperatures (10 percent to 20 percent lower) and more floor space per unit. By segmenting storage systems into smaller rooms that are tuned to the specific needs of storage, we could run the compute servers at higher temperatures, around 80 degrees Fahrenheit.
The second item is about containers:

The cost of building a new data center is extremely high—between USD 40 million and USD 60 million. As an alternative, we are considering placing high-density servers on racks in a container similar to those you see on container ships and trucks. We estimate that the same server capacity in this container solution will reduce facility costs by 30 percent to 50 percent versus a brick-and-mortar installation. Because it’s a small, contained environment, cooling costs are far less than for traditional data centers. Even if we build a warehouse-like structure to house the containers (thus addressing security and environmental concerns), the cost is dramatically less per square foot. In fact, the difference is so great that with this solution, brick-and-mortar data centers may become a thing of the past.
The site requires (free) registration, but once logged in, the article can be found here

Finally -- a presentation on their site for the energy efficiency opportunity had a cool slide on delivering data center optimization:

2002
3.7 TFlops
25 racks
512 servers
1000 sq. ft
128 kW

2007
3.7 TFlops
1 rack
53 blades
40 sq. ft
21 kW
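Running the numbers from that slide (the figures are copied from above; the ratios are my own arithmetic):

```python
# Same 3.7 TFlops of compute, five years apart:
sqft_2002, kw_2002, racks_2002 = 1000, 128, 25
sqft_2007, kw_2007, racks_2007 = 40, 21, 1

print(sqft_2002 / sqft_2007)   # 25x less floor space
print(racks_2002 / racks_2007) # 25x fewer racks
print(kw_2002 / kw_2007)       # ~6x less power for the same compute
```

That is roughly a 25x density improvement and a 6x power improvement in five years, which is exactly the kind of shift that makes small-footprint options like containers interesting.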

Thursday, November 08, 2007

Gartner Data Center Conference

Just a quick post to say that I will be attending the 26th annual Gartner Data Center Conference. This is being held in Las Vegas, Nov.27-30. I have my schedule ready to go and am really excited about attending this conference.

If anyone else will be attending and would like to meet up -- drop me an email.

IBM

For whatever reason, I've been surfing a lot lately on IBM. IBM is an absolutely enormous company and has its hands in just about everything. Here are some of the things I have been looking at lately:
Today, System x is the second-largest server group in IBM (based on revenue), next to System z, and by 2011 IBM expects it to be the largest server group.

Sunday, November 04, 2007

Symantec State of the Data Center Report 2007

Last Tuesday Symantec (SYMC) announced the release of their 2007 State of the Data Center Report. The international study surveyed managers of Global 2000 and other large companies. The magazines, web sites and company white papers are constantly full of industry statistics and trend monitoring, but I think this report did a nice job of doing the legwork necessary to get real data from those facing the issues in the data center today and presenting it in a clear and concise manner. I think one sentence in the paper summarizes the main point nicely:

Essentially, data center managers are being asked to deliver more high-quality services in an increasingly complicated environment, yet their budgets are relatively flat. As a result, data center managers find they adopt cost containment strategies that make use of new technologies, including virtualization, and new management approaches, such as those that automate routine processes.

Here are some of the highlights that I gleaned from reading the report:

  • Of the five factors impacting today's data centers, I think #2 and #5 are the big ones: #2 is staffing and #5 is disaster recovery/business continuity planning. Staffing has been noted several times in the press and is obviously becoming a large issue that managers must deal with.
  • Better preparedness for a disaster now versus two years ago was reported by 53% of the respondents. When thinking of locations for your DR and BCP plans, don't forget my site selection white paper.
  • Not surprisingly, the always-fun statistic proved true once again as a cause of downtime: twenty-eight percent of respondents listed "change or human error" as a chief reason for downtime. Although some stories have downplayed ITIL, I think for this reason alone you will see increased usage of the ITIL guidelines in data centers. This obviously plays into the staffing issues raised as well.
  • The report has good information on virtualization plans. I think it will be interesting to see how Microsoft fits into this market in the near future. I don't believe they will pose a serious threat to VMWare, but will most likely balance out the market a little more and have a decent percentage. VMWare was the top product listed in the U.S. but almost half of Asia-Pacific respondents are using Microsoft virtualization, with only 35% going to VMWare. I haven't finished watching it yet, but here is a video of Eric Traut from Microsoft, presenting on Microsoft's virtualization technologies (it also mentions Windows 7).
  • The outsourcing statistics were interesting. Forty-two percent of U.S. managers said they utilize outsourcing, while 61% of non-U.S. organizations do. "Among the most common tasks outsourced by both U.S. and non-U.S. organizations are server maintenance, backups, storage management, archiving, and business continuity."
This is, overall, a very good report and worth the read. Check out the press release here

Friday, November 02, 2007

Chicago Colo Armed Robbery

Many years ago I remember CI Host being a reputable, large hosting company in the industry. It seems as though they have gone downhill in recent years, and this recent event makes the bad even worse.

They have had at least four intrusions at their data centers in the last two years! In the most recent event, intruders apparently cut through the walls with a power saw. At least 20 servers were stolen and a night manager was tasered. To make matters worse, CI Host staff were not quick to alert customers or even admit the breach.

I predict a mass exodus from this facility (old DC pun intended)

Check out the article at The Register here

Wednesday, October 31, 2007

Brocade SAN Solution at U of Iowa

Brocade had a press release out yesterday explaining how their SAN directors were implemented recently. I've always liked Brocade products, found this an interesting case study, and figured I would see if my brother was paying attention to the blog. :)

Two 48000 Directors are the foundation of the SAN (Storage Area Network) and will scale to as many as 384 concurrently active 4 Gbit/sec ports in a single domain.
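A quick back-of-the-envelope check (my arithmetic, not a figure from the press release) shows what that scale implies for aggregate port bandwidth:

```python
# Back-of-the-envelope aggregate bandwidth for the SAN described above.
# Assumes all 384 ports are concurrently active at the full 4 Gbit/sec line rate.
ports = 384
gbit_per_port = 4

aggregate_gbit = ports * gbit_per_port   # 1536 Gbit/sec
aggregate_tbit = aggregate_gbit / 1000   # ~1.5 Tbit/sec in a single domain

print(aggregate_gbit, aggregate_tbit)
```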

Check out the news/press release at CNN here

First LEED Gold Certification



The first Gold Certification for LEED (Leadership in Energy and Environmental Design) has been earned by Digital Realty Trust. The project was for a Fortune Global 500 company (i.e., someone who could afford it) in Chicago.

The data center is 20,000 sq. ft. of raised floor with 4,000 kW of available IT load. They started the certification process more than a year ago and have several other projects expecting certification soon.

Check out the press release here

Monday, October 29, 2007

The HD Web

A short while back I explained what a cool company Akamai is. Back in February of this year I made my prediction for 2007. Well, Akamai continues to blow me away, and I think the prediction was perhaps a little ahead of the market (hey Verizon, how long until I get FiOS?!)

I forget where I ran across it, but Akamai has a new web site to showcase high-definition video capabilities through their network. It really is pretty amazing and gets me excited about watching HD over the internet (as soon as the fiber comes to my house).

Check out their site here and cool explanation of it all here.

Sunday, October 28, 2007

Green Data Center

Of the hundreds of articles we've all seen this year on the environment, saving energy, new energy sources and "the green data center", I wanted to point out two recent items I have read that I thought were particularly good.

The print version of the October 15th issue of eWeek ("The Green Issue") has a lot of good information, statistics and news. I couldn't find a direct link to the issue, but here is their 5 Steps to Green IT.

Secondly, Matt Stansberry at Tech Target is releasing a chapter at a time of his E-book, The Green Data Center: Energy Efficient Computing in the 21st Century. The chapters I have read so far are excellent. Matt has a lot of good information, stats and industry trends. I am anxious for the chapters to come.

Tuesday, October 23, 2007

HP's Dynamic Smart Cooling

Just a quick link to an article about HP debuting their new 'smart' India data center. They claim to have achieved 40% energy savings by using their technologies and methodologies.

Check out the San Jose Mercury News article here

Data Center Outsourcing

I ran across an interesting link in my email recently, so I kept reading. EDS has received a "Strong Positive" rating from Gartner. This comes from the 2007 release of their "MarketScope for Data Center Outsourcing, North America".

EDS was one of 17 companies evaluated and manages over 100 data centers worldwide. Check out their press release here

I of course didn't have a spare $1,995 lying around for the full report, so I searched around and found a good summary of it. Unisys was listed in the report ("Positive" rating) and had the summary on their web site. There were a few companies I didn't recognize. I suppose I think too much about the colo companies and not enough about those providing the data center as an outsourced service.

I wish I were able to purchase the report - it looks very thorough and does a nice job of spelling out the evaluation criteria and market segment. Here are some of the companies (with links) that were evaluated in the report:

Acxiom
ACS
Atos Origin
Capgemini
CGI
CSC
EDS
HP
IBM
Infocrossing
Maintech
Northrop Grumman
Perot Systems
Savvis
Siemens
Sungard
Unisys
Vericenter (recently acquired by Sungard)

Tuesday, October 16, 2007

DC Site Selection

A long while back I mentioned that I was working on a site selection post for the blog. As I was working on it, I decided it was worthy of more than just a post.

I am happy to announce that I have finished a white paper on the topic of Data Center Site Selection. I welcome any and all comments and hope it is informative and helpful.

If you decide to link to the paper, I would ask that you link to this post instead of directly to the PDF.

Link: http://datacenterlinks.blogspot.com/2007/10/dc-site-selection.html


Rath Consulting
October 2007

Sunday, October 14, 2007

Good Code and Computing Overhead

Alistair Croll over at Earth2Tech has an excellent article on coding practices, coding complexity and the effect they have on the data center. It shows how complex our applications have become and all of the different components that go into making them run.

I'm not sure I could do it justice in discussing or quoting, so check it out here.

Netapp DoD Data Migration

Enterprise Storage Forum has a pretty interesting story on the Defense Contract Management Agency (DCMA) going from 18 data centers to 2. They virtualized the middleware and consolidated data with Network Appliance's Virtual File Manager (VFM).

In addition to consolidating the servers, DCMA also needed to set up a common file system that would work across the enterprise. For this it created a File Area Network (FAN) using software and storage appliances from Network Appliance. DCMA has more than 300TB of storage.
I think we will continue to see stories like this as large projects get under way to consolidate, virtualize and set up common file systems and platforms across the enterprise.

Check out the article here

Monday, October 08, 2007

Server Farm Goes Solar

Server farm company AISO (Affordable Internet Services Online) has built a 2,000 sq. ft. facility banked with solar panels that generate 12 kilowatts of electricity. Located south of Los Angeles, they claim to be 100% solar powered.

To slash energy consumption, AISO.net switched from 120 individual servers to four IBM blades running virtualization software that lets one computer do the work of multiple machines. The cooling system cranks up for only about 10 minutes an hour, and when the outside temperature drops to 60 degrees, air is sucked into the building to cool the servers. Solar tubes built into the roof illuminate the facility's interior.
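Out of curiosity, here is a rough sketch of how the solar math might pencil out. The peak-sun-hours and server-draw figures are my assumptions for illustration, not numbers from the article:

```python
# Rough daily-energy sketch for AISO's 12 kW solar array.
# 5 peak sun-hours/day is my assumption for Southern California,
# and the ~2 kW draw for four blade chassis is also a guess.
array_kw = 12
peak_sun_hours = 5
daily_solar_kwh = array_kw * peak_sun_hours   # ~60 kWh/day generated

server_kw = 2
daily_server_kwh = server_kw * 24             # ~48 kWh/day of 24x7 IT load

print(daily_solar_kwh, daily_server_kwh)
```

With assumptions in that ballpark, the array covers the IT load with some margin left for the (intermittent) cooling, which is consistent with the 100%-solar claim.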

Check out the CNN article here and AISO site with further details on the implementation here.

Akamai Enters the Data Center


Akamai is a pretty amazing company. They have been around since 1999, survived the dot-com bust and the loss of co-founder Danny Lewin in the 9/11 attacks, and continue to be an innovative company amid growing competition.

Today they announced a new application acceleration service. As product marketing manager Neil Cohen explains it, the service takes traffic off the WAN and substitutes the Internet. It utilizes their 27,000+ points of presence worldwide.

Akamai's software achieves faster response times by optimizing Internet routing, the vendor claims. Its server-based software opts, not for the default shortest path first as data traverses the network, but for a route that may look longer on a map but turn out to offer better response time. "We find good latency and an available path," Cohen states. Akamai also adds transport flow optimization and protocol optimization, he adds.
Pricing will be similar to their web application acceleration service. Akamai has had several trial customers for the service who claim to have recorded significant cost savings. Akamai shares jumped 8.7% on the news.
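The route-selection idea above can be sketched in a few lines: rather than the default shortest path, pick whichever candidate overlay route currently measures fastest. The routes and latencies below are made up for illustration:

```python
# Toy illustration of latency-based route selection (not Akamai's actual algorithm).
# A longer-looking path through intermediate nodes can beat the direct route.
routes = [
    {"path": ["src", "dst"],                 "latency_ms": 180},  # direct, fewest hops
    {"path": ["src", "mid1", "dst"],         "latency_ms": 120},
    {"path": ["src", "mid1", "mid2", "dst"], "latency_ms": 95},   # longest, but fastest
]

fewest_hops = min(routes, key=lambda r: len(r["path"]))
lowest_latency = min(routes, key=lambda r: r["latency_ms"])

print(fewest_hops["latency_ms"])     # 180 -- what shortest-path-first would give you
print(lowest_latency["latency_ms"])  # 95  -- what latency-aware selection gives you
```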

Check out the information in the following places:

Byte and Switch
Red Herring
Internet News
Reuters

Tuesday, October 02, 2007

Terremark Building in Virginia

Terremark has started on the first 50,000 sq. ft. of a data center that will eventually total 450,000 sq. ft. in Culpeper County, Virginia. The $250 million project is expected to put Culpeper County on the map.

With existing facilities in Florida and California, Terremark is looking to provide a little more safety, security and government compliance with the Virginia site. Culpeper is approximately 66 miles southwest of Washington, D.C.

Check out the WHIR news article here (complete with a misspelling of Culpeper County as a country) :)



Friday, September 28, 2007

BGP Reform

Just a quick post (because the interesting information is in the article) on efforts underway by the Internet Research Task Force.

Check out this Network World article on research being done to radically change the Internet's core routers and way that BGP works.

Sunday, September 23, 2007

WAN Optimization

Something I have always been simultaneously very interested in and confused by is the wide area network. WAN technologies, optimization and acceleration are creeping into headlines all over the place. With data centers growing in number and consolidation projects pushing data between geographically dispersed locations, the WAN is becoming increasingly important to build and operate. This post is really just to aggregate the information I have run across, help myself understand it better and, with any luck, help someone else interested in the topic as well.

Hardware

  • Matisse Networks has some pretty cool products for what they are calling the "first optical burst switch, purpose-built for scaling metro and campus networks from 10 to 640 Gbps." Their equipment and technology allow you to scale the metro area network beyond 10 Gbps while reducing the capital expenditures needed to accomplish it. Optical burst switching combines Ethernet and the enormous bandwidth of DWDM. Matisse is touting it as the next step in the evolution of optical network products (fixed wavelength --> reconfigurable DWDM --> optical burst). A whitepaper explaining the technology and products can be found here

"There are four Metro Ethernet scenarios that we see developing.

1. Simple L2 aggregation combined with transport. Limited support for legacy interfaces and no support for TDM. Limited traffic management and service awareness.
2. The God box. Support for TDM grooming, L2 switching and perhaps even MPLS routing.
3. Sophisticated L2 switching and L3 routing. Deep packet classification and traffic management. No support for TDM.
4. The stupid network. Buy dirt cheap commodity transport equipment and manage everything at the wavelength level. Backhaul everything to enormous Cisco and Juniper routers and sort it out there."
  • Click here to check out the Gartner 2006 WAN Optimization Controller Magic Quadrant report.
  • Here are some Cisco implementation stories of their Wide Area Application Services solution

WAN Optimization/Acceleration

  • Mark Weiner over at the Cisco Data Center Networks blog talks briefly about WAN optimization. He mentions a company that saved 3.2TB (terabytes) of WAN traffic and related expenses within a one-month period!
  • An interesting lessons-learned story comes from a Byte and Switch article about Kansas City-based 360 Architecture. Theirs is a story of WAN optimizers and consolidation of SAN resources. The article explains their problem, vendor evaluation and eventual 4X improvement in WAN speeds.

Fiber

Finally - the fiber networks themselves are getting attention as well. The attraction of owning your own private fiber network was underscored by Paetec Holding acquiring McLeodUSA for $492 million in stock. Check out the details and article here

Friday, September 21, 2007

Data Center Consolidation

As if we needed further proof that data center consolidations are top of the to-do list for IT departments....

CIO Today has a nice article in which they surveyed 29 state CIOs; 80% had developed, proposed or had in progress a consolidation plan. As with the last post, it's interesting to gain some insight into how other organizations have their IT structured. Perhaps a little scary in the case of some of the stories in this article. :)

Check out the article here

World's Largest SANs

Byte and Switch has an initial article on what they are calling the world's largest SAN environments. They give some interesting, albeit brief, descriptions of the SANs at JP Morgan Chase, the U.S. DoD, NASA, the San Diego Supercomputer Center and Lawrence Livermore National Laboratory.

These organizations have pushed past the petabyte level, and the goal of the study by Byte and Switch is to see how companies scale their storage infrastructure and what vendors they are working with. It was interesting to me to see how many are using SGI (I still really like SGI).

Assuming they will divulge more about this list in the future, I'll be anxious to read more of the details. Besides....forget about Petabytes -- let's talk Exabytes!

Check out the Byte and Switch article here

Tuesday, September 18, 2007

Green Data Centers

I have avoided posting too many things on green data centers or programs that are being announced for new 'green' initiatives. If you read my blog there is an exceptionally good chance that you run across a few dozen stories a day on the topic. My personal favorite is the IBM ad where they are taking green paint to the walls of the data center. I don't have anything against the initiatives or press on the subject -- it's just a little overwhelming at times.

Tonight, though, I ran across a very nice article that summarizes all of the recent news, initiatives and other tidbits quite nicely. It also has a number of links out to websites with all of the pertinent information.

In the article, Bruce Myatt from EYP Mission Critical Facilities covers:
  • The July EPA report and plans to Congress (I agree that this report is a must-read for anyone in the data center business)
  • Initiatives summary for Sun, IBM, Dell, HP, Fujitsu, 365 Main, LBNL and others
  • Finally and perhaps most intriguing is the following quote:
Finally, an anonymous Internet giant in the neighborhood is said to be developing a “fan free” air circulation system to cool its servers. Relying upon natural convection and server fans only, their data centers may require no computer room air-conditioner units or plug fans at all to drive the air circulation in their data centers. Now, that is progress!

Also -- as Rich Miller mentioned today, the DOE is joining the Green Grid.

Check out the article from Bruce in CSE Magazine here

Sunday, September 16, 2007

TEAM Companies - Expansion

TEAM Companies has a second announcement, shortly after the story of the new Madison, WI data center: TEAM is also planning a $10 million expansion of its Cedar Falls, Iowa data center. The 15,000 sq. ft. expansion would double the current footprint.
"Our facility in Cedar Falls is filled. That's the great news," Kittrell said. "But we have more Iowa companies that are looking for our services. So we never like to be in the position where we don't have anything to sell."
Check out the Waterloo Courier article here

Friday, September 14, 2007

Iowa Company Builds New Data Center

Just a quick follow-up post on the announcement yesterday from TEAM Companies.

Check out an article in Forbes on the announcement here.

Thursday, September 13, 2007

TEAM Breaks Ground on New Data Center

TEAM Companies held a groundbreaking ceremony today for a new data center in the Fitchburg Technology Campus near Madison, Wisconsin. "This groundbreaking is another major step for the Madison area, adding capabilities to a region that is already an innovation hotbed for Information Technology", stated TEAM Companies’ VP of Development Mark Kittrell.

Citing the desire of companies to locate their data centers in secure and obscure places, TEAM focuses on secondary markets like Madison, WI, Cedar Falls, IA and other upper midwest locations.
“We have a proven method of designing, building and operating world-class data centers,” says Kittrell.

TEAM brings a 'lease' option to companies faced with the decision of whether to build their own facility or not. Leasing data center space allows a business to let someone else worry about the operational and overhead aspects of running a data center.

Madison and its suburb of Fitchburg are close, yet far enough away, to be a perfect location for this type of facility; it allows clients to store their data “off site” from their offices, but it’s also close enough for them to get to their servers within an hour or two. Fitchburg also has plentiful and relatively inexpensive power.
The Fitchburg location will house a 50,000 square foot office building and 20,000 square foot data center.


Tuesday, September 11, 2007

Everyone Wants a Data Center

Columbia, Missouri thinks it would be a "good fit" for the data center industry. They have hired Angelou Economics of Austin, Texas to evaluate the area and see what it needs to do to increase the potential of attracting a data center.
"A REDI (Regional Economic Development Inc.) economic development report to the City Council, prepared with the consultant’s input, said the combination of a skilled workforce and the presence of a large research university are important criteria for attracting a high-tech firm. Good quality of life — with low crime, strong public schools and affordable housing — is also important."
Check out the article here

Monday, September 10, 2007

NSF Data Center Grant

A three-year, $200,000 grant was awarded to Dr. Xiaorui Wang at the University of Tennessee. The grant will support a proposal he submitted titled “CSR--PDOS: A Holistic Framework for Power and Performance Control in Data Centers.”
"...research which applies a multi-input/multi-output control theory, organizing power consumption and the application of a high-density server on a large-scale data center. This proposal of Dr. Wang’s was chosen from a collection of 410 other proposals presented to the NSF Computer Systems Research."

Check out the article here

Friday, August 31, 2007

Google Dalles Pictures

I haven't taken any photos of the Council Bluffs Google data center yet, but thanks to Information Week there are photos of The Dalles, Ore. facility.

Check out the nice photo album of the facility and surroundings here.

Blog Day 2007

Since it is Blog Day 2007, I thought I would put my two cents in (or five links). More than 90% of my blogroll is tech related, so it is somewhat of a challenge to come up with non-tech blogs. I like the idea though, and hope that you find these enjoyable.

1. thesimpledollar.com Financial Talk for the rest of us
2. 24-7 Family History Circle Blog from Ancestry.com
3. Dan Pink Author of A Whole New Mind (favorite book of mine I read recently)
4. Happy Neuron I just like the title :) (good site though)
5. Digitally Imported My favorite online radio station

Monday, August 27, 2007

Storage Service Provider (SSP)

A post from the Burton Group made me nostalgic tonight. They brought up the idea of a potential revival in Storage Service Providers. InfoWorld also writes about the new storage cloud and how it will undercut the market dominance from EMC, HP and IBM. I think Nik Simpson states it pretty well: "Those who cannot remember the past are condemned to repeat it."

I immediately went to check blogs from Chuck Hollis and Dave Hitz to see if they had anything to say on the topic. They didn't. Why? Because it is silly. First of all it's a term....nothing more. Journalists and marketing geeks like nothing more than to coin a term to spin the hype machine and see who will pay attention. Storage Service Provider (SSP), SAAS (Storage as a Service), storage cloud, Managed Storage Service....whatever. It is all just 1's and 0's on platters.

Reading the article took me back a bit, as a number of years ago I was part of a dot-com company that offered online storage. We were like the rest of the dot-coms and offered 1GB, 5GB and 10GB plans, with a backup tool that would take your data directly to your online account. We ran the site for many years, and I always remember looking to one company as 'the' big dog of the industry. Storage Networks was the SSP of the time and (I think) had grown quite a large business. They took the enterprise side of the market, while sites like idrive, xdrive, swapdrive and others took the consumer side. Take a look at what the Storage Networks site looked like in 2000.

Today, we have Amazon's S3. I'm still amazed at this offering and almost equally amazed that it hasn't caught on more than it has. I was trying to think if S3 had any real competition and the only thing I could come up with is an equally intriguing offering from Cleversafe. It is fun to see how people are using S3 though. Check out this article about Stardust@home:

The Stardust@home project uses the Amazon Simple Storage Service (Amazon S3) to store and deliver the tens of millions of images that represent the data collected from the dust particle aerogel experiment.

Anyway....back to the SSP model. The model is an enterprise one. I really believe that S3 and other online storage services that have survived can only truly cater to home/office users, small companies and some aspects of the hosting world. Enterprise customers want their data close to them and want to know it is managed, protected and monitored at all times. Regulations and data leakage incidents have ensured that enterprise storage is on a short leash at all times.

I do believe that managed storage services (see - even I can't stop from using IT lingo) have a place for some. Inside the data center, or between trusted parties with elaborate SLAs and other agreements, it could be beneficial to let someone else manage the scalability, reliability and a few other 'ilities of your storage. If nothing else, perhaps the consumer side of the market will help push down some of the insane prices that the enterprise players charge. Maybe a colo or integrator could offer a pay-for-what-you-use model to the SMB market and see if it flies.
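A pay-for-what-you-use model is easy to sketch. The rates below are hypothetical (loosely in the neighborhood of S3-era pricing), not anyone's actual offer:

```python
# Sketch of a metered, pay-for-what-you-use storage bill.
# Both rates are made-up illustration numbers, not a real price list.
STORAGE_RATE = 0.15   # $ per GB-month stored
TRANSFER_RATE = 0.20  # $ per GB transferred out

def monthly_bill(gb_stored: float, gb_transferred: float) -> float:
    """Charge only for what the customer actually stored and moved."""
    return gb_stored * STORAGE_RATE + gb_transferred * TRANSFER_RATE

# A small business storing 500 GB and pulling down 100 GB in a month:
print(monthly_bill(500, 100))  # 95.0
```

Contrast that with enterprise storage, where the same capacity is typically bought up front as hardware, maintenance and staff whether it is used or not.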

Thursday, August 23, 2007

Netapp Cogeneration

It's too late for me to think up any comments on this article, other than to say that this is pretty cool (literally) stuff.

Very much worth a read.

Check out the article here

Wednesday, August 22, 2007

Google: Council Bluffs Update

The building has already been expanded, the exterior walls are up and they are continuing to hire/recruit for jobs at the Google Council Bluffs facility. They mention "working around weather conditions" -- it has either been extremely hot, or raining constantly here lately, so I'm sure that has hindered things.

I really wish I could drive down there and snap some shots of construction -- assuming that I could even get anywhere near enough to get a glimpse.

Check out the article here.

DataSynapse

On the GridComputingPlanet.com website there is an article on DataSynapse and the growing trend of jumping on the virtualization bandwagon. They now consider themselves the fastest-growing application virtualization vendor.

I personally like the term grid (for them) better, but I can see a fit with virtualization as well.

"We're actually a lot like VMware in the problem we're solving," said Bernardin. But instead of creating virtual machines, DataSynapse creates "application instances" to maximize application performance, decoupling applications from underlying resources to improve scalability and resilience and set priorities.

"That requires an underlying platform, such as a grid," said Bernardin, who calls grid technology a "precursor" to such advanced functionality.

They expect 50% sales growth this year and a potential IPO next year! I ran across them last year and really like their product offerings. I spoke with a partner rep at the company because I have an idea for a product offering of my own that would incorporate their product. Their GridServer and FabricServer products are extremely cool and worth a look.

I would 'almost' venture to say that they are a good target for merger/acquisition. Perhaps the IPO is a backup plan. Adding/acquiring something like 3Tera's Applogic might be a nice complement....?

Check out the article here

Tuesday, August 21, 2007

Virtualization: I'm OK, You're OK

With all of the hype around virtualization, I couldn't resist another post. A few items I've found while surfing lately have really piqued my interest.

The first one is well over my head, but extremely cool. It's a year old, but I imagine some of the concepts still apply (or have been revised/improved/tweaked). It's called Hardware Virtualization Rootkits, from Dino A. Dai Zovi, and can be found here. If you are into security at all, this guy does some insane research.

The second one I ran across was from a favorite blogger of mine - Christofer Hoff. It's a presentation on Virtualization and Network Security. It's an excellent mix of Virtualization concepts, vulnerabilities and solutions from his former company, Crossbeam. Check out the blog post here and presentation here.

The final item is a presentation entitled The Virtualized Rootkit is Dead, from Matasano, Symantec and Root Labs. It discusses HVM malware, virtualized malware detection and the Samsara framework. (Dino Dai Zovi is on the Matasano team.)

Wednesday, August 15, 2007

What I did on my Summer Vacation

It seems like I wrote a multitude of these types of papers when I was in school. For the sake of catching up on links, here is what I did on my 2007 summer vacation.

  • Outside 2.0: As a computer geek, information junkie and perpetual web surfer, I find this getting-outside thing pretty cool. My family takes an annual trip to visit relatives, and the particular town we visit is a wonderful reminder of the real world, with wonderful scenery - the perfect place to ditch the phone and laptop (besides, there is no wi-fi hotspot for MILES). On the second day we went to a St. Louis Cardinals game (the 2006 World Champion Cardinals). Although my sons' hero didn't win the game for us like last year, they did win and it was a good (but really, really hot) game. The relatives' house where we stayed did not have an internet connection, but their neighbors were nice enough to have open wireless access points for me...and a few that still had the default router login/password. :)
  • I was able to catch up on a number of podcasts that I listen to and try a few new ones out. I was glad to see that Om Malik has a podcast on the red-hot media network Revision3. Since Om made a few (much more qualified) predictions, I thought I would give it a try also. He stated that he thought eBay will buy Second Life; 'I' think Google will buy it and integrate a version of a virtual world into Google Earth. Ok, maybe this is just my dream....but it could happen.
  • I was also able to read a few books I had waiting in the queue. I listened to an audio version of Al Franken reading his book Rush Limbaugh Is a Big Fat Idiot. It was very funny and cool to hear Al reading it. I also read Blink, by Malcolm Gladwell. This was a good book as well, and now I need to back-track and read its predecessor, The Tipping Point. I'm thinking of a subscription to Audible as a Christmas present this year.
  • As an information junkie, I couldn't stay away from blogs for too long. Rich Miller kept me up to speed on the industry, and I went back to read several posts from fellow Iowa blogger Trent Hamm. Trent writes a very good (and popular) blog, The Simple Dollar, on 'financial talk for the rest of us'. His story, and how he finds the time and ideas for writing his blog, was/is inspiring. Check his blog out here.
  • For whatever reason I decided to peruse the quarterly 10-Q statements from Terremark, Equinix and Savvis. They were interesting in their lawyer and SOX-filled ramblings, but one thing caught my eye. From my last post, I had done a Google trends search for the term "data center". Interestingly, the second place region that is searching for that term from 2004-2007 is Singapore. In the Equinix 10-Q I read:
"In March 2007, the Company entered into long-term leases for new space in the same building in which the Company’s existing Singapore IBX center is located (the “Singapore IBX Expansion Project”). Minimum payments under these leases, which qualify as operating leases, total 3,674,000 Singapore dollars (approximately $2,394,000 as translated using effective exchange rates at June 30, 2007) in cumulative lease payments with monthly payments commencing in the third quarter of 2007. The Company is building out this new space in multiple phases. As of June 30, 2007, the Company incurred approximately $11,200,000 of capital expenditures to build out the first phase."
Singapore seems to be a hot spot, and I have to wonder if we'll see announcements out in the near future about data centers being built there.

  • In my Gmail I found a link to a Syska Hennessy white paper that (it being vacation) I had some time to read. It was on outside-air economizers and a very interesting read. My new term for the month is enthalpy (in thermodynamics and molecular chemistry, the enthalpy or heat content, denoted H or ΔH, is a thermodynamic potential of a system that can be used to calculate the "useful" work obtainable from a closed thermodynamic system under constant pressure). As usual, I most likely wouldn't do it justice in trying to explain it, so check it out here.
  • Last, I worked some more on a Data Center Site Selection post I have been developing for some time. Through surfing various sites I ran across an article that mentioned the "Pandemic severity index". From a DR and BCP perspective, I thought this was pretty interesting. Check out the Wikipedia article on it here.
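The enthalpy definition mentioned above untangles into one short, standard formula: internal energy plus a pressure-volume term, whose change at constant pressure equals the heat absorbed (which is why it shows up in economizer and cooling calculations):

```latex
H = U + pV
\qquad\Rightarrow\qquad
\Delta H = \Delta U + p\,\Delta V = q_p \quad \text{(at constant pressure)}
```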

I still have a few days left of vacation -- perhaps I should get back to the outdoors and enjoy it while I can. It may take several weeks to get caught up on work, but the time spent with family, the 100+ degree temperatures and the "are we there yet" car rides were definitely worth it.

Wednesday, August 08, 2007

Data Center 3.5

Since I am unable to attend the NGDC conference (I really, really wanted to go), I thought I would have a little fun and coin my own term. I would like to declare DataCenter 3.5. Why? Just because :)

Cisco declared Data Center 3.0, Eric Schmidt declared Web 3.0 and I'm going to declare Data Center 3.5.

This Infoworld article made the dare, so I thought I would oblige. :)

Ok...so I suppose I could attempt to put a little bit behind the statement. Let's take a look:

  • Data Center 1.0: Everything pre-2002: from small server rooms all the way up to Exodus and other dot-com boom companies. BTW: when searching for Exodus, I found it funny that this is what I came up with. The Equinix IPO was August 11, 2000.
  • Data Center 2.0: 2002-2006: Post-dot-com-bubble rebuilding. The industry regroups and numerous other colocation companies come on the scene. Many of the big players start to outpace the S&P 500.
There you go.... Data Center 3.5 begins. :) It really is quite silly; all we really measure in the whatever-x.x is a hype cycle. On that note, I wandered over to a fun little distraction...Google Trends. Here are my finds:

Data Center vs. Data Centre

Data Center companies in 2007 (I thought the 365 main spike was kind of funny)


Hopefully Public Relations 4.0 is next.

Monday, August 06, 2007

DNS Re-binding Attacks

This is a little off-topic, but incredibly interesting (originally found at the O'Reilly blog). It is a paper/presentation by Dan Kaminsky from the recent Black Hat Black Ops 2007 conference on turning your browser into a TCP/IP relay. Anyway....I have only read the first half-dozen pages or so, but it is really fascinating. As Artur Bergman puts it in the O'Reilly blog post, "I'm really glad Danny is on our side"


PDF here
Slides here

Sunday, August 05, 2007

Google in Korea

I'm not sure if this means they are exactly 'building' a data center, but the Data Center Journal reports that Google is "transferring servers" to Korea in order to better comply with local regulations.

Other things I have read here and there certainly show Google's interest in the global market, underwater fiber to Europe/Asia and peering arrangements.

Check out the Data Center Journal article here.
Google's Fiber (older story) here
Google jobs: Strategic Negotiator-Global Infrastructure