Data centers have been around for decades, yet most people probably don't know what they are, or don't particularly care. When you use a service or website available on the public Internet, there is a very good chance it's being hosted in a data center somewhere. Having worked in Information Technology for over 20 years, I have had the opportunity to see this technology evolve.
Data centers have always been built around the premise of hosting thousands of servers reliably and securely. These buildings are also notorious for consuming huge amounts of power; by the last estimate I heard, they consume about 2-3% of all U.S. power generation. They house thousands of servers, network equipment, and various other support systems. They must also cool the tremendous amount of heat generated, supply power to all the equipment, and provide backup power when the main power fails.
As the cost of energy rises, running these data centers has become so expensive that operators are now looking for ways to decrease costs by improving energy efficiency. Some of the things they have done include buying more energy-efficient servers and equipment, utilizing virtualization, and making simple changes to better manage cooling.
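As an aside, the industry commonly tracks these efficiency gains with a metric called PUE (power usage effectiveness): total facility power divided by the power actually reaching the IT equipment. Here is a minimal sketch of the calculation; the example wattage figures are made up for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 would be a perfect facility (every watt goes to servers and
    network gear); the overhead above 1.0 is cooling, power conversion
    losses, lighting, and so on.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1500 kW total draw, 1000 kW reaching IT equipment
print(pue(1500, 1000))  # → 1.5
```

The closer a facility drives this number toward 1.0 (by improving cooling, power distribution, and server efficiency), the less money is spent on overhead per useful watt.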
Modular Data Centers
One major change in data center construction is the use of shipping containers to provide modular expansion of a data center's compute infrastructure. Shipping containers, or similarly built structures, are loaded with servers and support equipment, shipped to a location, and essentially just plugged in. It's an oversimplified explanation, but it provides a general overview of the technology.
This concept was first introduced in October 2006, when Sun announced its "Project Blackbox," a showcase prototype of a containerized data center. Now known as the Sun Modular Datacenter (or Sun MD), it is a portable data center built into a standard 20-foot shipping container, manufactured and marketed by Sun Microsystems (which is now owned by Oracle Corporation).
Unconventional Server Design
Most organizations buy their servers from companies like HP, Dell, and a few other manufacturers. These companies make solid, fault-tolerant, and very fast servers with extra bells and whistles (such as remote management, high-end RAID controllers, etc.). Because of all these features, they're also very expensive; a nicely equipped server can easily cost $15,000-20,000 or more.
At Google's scale, they took a more radical approach that has worked very well for them. Instead of buying conventional off-the-shelf servers, they put a tray in a rack with a motherboard, hard drive, power supply, and a battery. Google uses a grid architecture that allows them to treat these machines as general-purpose compute nodes rather than servers with a specific purpose. What this means is that all equipment is part of the generic compute infrastructure as a whole and doesn't perform a specific task. This gives them massive flexibility in the infrastructure, amazing cost savings, and unparalleled reliability. Even if they experience massive server outages, the resilience of their infrastructure helps them weather those outages.
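The idea of treating machines as interchangeable compute nodes can be sketched in a few lines. This is a hypothetical illustration, not Google's actual scheduler: any task can land on any healthy node, and failed nodes are simply skipped rather than causing an outage:

```python
# Hypothetical sketch of a generic compute grid: tasks are not tied to
# specific machines, so losing a node just removes it from the pool.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    healthy: bool = True
    tasks: list = field(default_factory=list)

def schedule(tasks, nodes):
    """Assign each task to the least-loaded healthy node."""
    for task in tasks:
        candidates = [n for n in nodes if n.healthy]
        if not candidates:
            raise RuntimeError("no healthy nodes available")
        # Pick the healthy node currently running the fewest tasks.
        min(candidates, key=lambda n: len(n.tasks)).tasks.append(task)

nodes = [Node("n1"), Node("n2"), Node("n3", healthy=False)]
schedule(["job-a", "job-b", "job-c"], nodes)
# The failed node n3 receives nothing; the jobs spread across n1 and n2.
```

Because no job depends on a particular machine, cheap commodity hardware can fail freely; the pool as a whole stays reliable.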
If you look at Facebook's Open Compute Project or servers from manufacturers like HP, Dell, and others, you can see the influence that Google's server design has had on modern server infrastructure.
If you know anything about data centers, you know that companies place them around the world to keep the infrastructure close to their customers and to provide protection against natural disasters, geopolitical conflicts, and power or communications outages. There have been several cases of backhoes cutting fiber-optic lines and taking entire data centers offline because they didn't have multiple paths to the Internet. Organizations strategically place their data centers around the world so they can respond to visitors' requests more quickly while reducing or eliminating downtime from a massive outage.
Google and Facebook Data Center Initiatives
Have you ever wondered what it looks like inside Google's data centers, or what it takes to run cloud infrastructure? I will note that these videos are from the companies that host this equipment, so I am not releasing any trade secrets that they aren't already publicly sharing with the world.
Google Container Data Center Tour
Security and Data Protection in a Google Data Center
This video tour of a Google data center highlights the security and data protections in place at their data centers.
Introducing the Open Compute Project
This video introduces new technologies from Facebook and industry partners to create more energy-efficient and economical data centers. Details on the industry-wide initiative are available at http://opencompute.org/.