Posted on: April 15th, 2011 by Bill
It’s been over a week since Facebook published all of their server, power supply and data center designs. They can be found on the Open Compute Project at http://www.opencompute.com/. I’ve been involved with Facebook since before the patent application, and for those who follow my blog, you can now see where we have been going with the new dynamic in facilities design. It has been a privilege to work with the server, power supply and facility teams over the past 24 months, and a deeply rewarding experience with a group of very capable professionals who are, in addition, great folks to boot. I personally swear to the Jay Park cocktail napkin design story, as I have personally seen said napkin. Someday it will be placed in the Facebook Museum or Hall of Fame.

Let’s be honest. Facebook and the other social media and web content businesses that write and manage custom code for their operations have a unique opportunity: they can integrate their application and server approaches, then shape the critical facility solution to serve those apps and that hardware most pointedly.

So what do we do about the rest of us, whose apps run on a host of systems? We think the first, best step is hardware standardization on a commodity system. The operating system is a commodity already, whether open or closed; the same is true of any app that could run on a variety of similar platforms provided by a large set of suppliers. This work was started over a decade ago, at Google. In short: “We buy a lot of servers; we need them cheap, identical and fast.” We recall seeing the earliest Google servers in the Equinix facilities in Ashburn, VA, in the early 2000s. The Facebook solution seeks to further this by mass-industrializing the process while opening the approach to the marketplace.

What’s different today is the push from Facebook, and others in that business, to drive hardware to a new dynamic. The Facebook and Google approaches aren’t for everyone, today. Still, there’s not a data center manager in the world who would not kill to run a homogeneous platform, where the only worry is the apps (the Facebook facilities have a very simple power and cooling infrastructure). We all recognize that these businesses are cloud computing on steroids. While some may dismiss cloud computing in the private sector, there’s no doubt that we should look at the Open Compute solution as a way to render the hardware side of the business a commodity. Based on sheer volume and magnitude, Google, Microsoft, Apple or Yahoo! simply hit this type of server harder than the rest of us. However, some folks can employ it as a transitional step as they migrate and reshape a more mature systems inventory, not unlike blades 10 years ago (another Google idea).

We also feel this is in the same vein as open source operating systems, like Linux, or widely adopted systems such as Windows and Java. The question that remains is how long it will take the hardware manufacturers to take up the slack, or whether they will resist the trend and pay Open Compute lip service. The sinister alternative is that the open source server initiatives become an adjunct to the hardware manufacturers’ traditional business. We feel that the traditional server and storage business is better integrated with their cost and manufacturing models, and likely to be far more profitable for them for the next several years.
We do agree that the Facebook Open Source initiative seeks to place the impetus and control of the data processing business back into the hands of the users. WHAT A CONCEPT! The direct benefit of this is a known electrical and mechanical solution set for the critical facility environment. That means cost, schedule and operations become very well known, by a large community, very, very quickly. It also means that server compute power (as viewed in MIPS/kW or TB/kW) should plummet in price. That’s not going to make some folks happy.
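A quick back-of-the-envelope illustration of why (the numbers here are hypothetical, purely for scale): suppose a facility has 1 MW of IT load to spend, and a commodity Open Compute server delivers twice the MIPS per kW of the legacy box it replaces. The same megawatt now buys twice the compute, so the effective cost of each unit of MIPS/kW is cut in half before anyone has even renegotiated the hardware invoice. Stack a lower per-box purchase price on top of that efficiency gain and the price compression compounds.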
Tags:
data center,
data centers,
modular power,
power distribution,
PUE,
UPS power