Tuesday, November 10, 2009

Google Data Center Cooling

Worried about power costs in your data center? Feeling squeezed by the power company? Dreaming of modular designs and airflow at night? Well, relief is on the way–in four years or so.

This is the bright side–Gartner style. On Thursday, Gartner analyst Michael Bell outlined a few data center tips and factoids worth considering. Among the nuggets at the Gartner Symposium/ITxpo:

By 2008, half of the current data centers won't have the power or cooling capacity to deal with their equipment. Translation: You're toast.
In 2009, power and cooling will be your second biggest data center cost. Most of you are there already, and for companies like Google, power costs are the top expense.
But by 2011, technology–primarily better cooling strategies, more efficient chips, DC power, in-server cooling and real-time monitoring–will ride to the rescue at least to the point where you'll be able to sleep.
Blade servers are good (sort of). If you think blade servers racked and stacked are a data center fix, you're only half right. The blade-a-thon results in denser data centers and more computing power. That's fine and dandy, but now you need more juice to cool things down. Eventually this works out–as technology rides to the rescue. Gartner recommends doing an energy audit to see what your blade servers really consume. Then you can at least make up for the power loss elsewhere (another reason for green IT practices). See chart below:
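The audit Gartner suggests is easy to rough out. Here's a back-of-the-envelope sketch; the rack size, per-blade wattage, cooling overhead and utility rate are all illustrative assumptions, not Gartner figures:

```python
# Back-of-the-envelope blade rack energy audit. All figures below are
# illustrative assumptions, not Gartner's or any vendor's numbers.
servers_per_rack = 64            # densely packed blade enclosure
watts_per_blade = 300            # assumed measured draw per blade

it_load_kw = servers_per_rack * watts_per_blade / 1000.0   # 19.2 kW

# Cooling and power distribution add overhead on top of the IT load;
# a multiplier of 1.8 is a typical 2009-era facility estimate.
total_kw = it_load_kw * 1.8

kwh_per_year = total_kw * 24 * 365
cost_per_year = kwh_per_year * 0.10      # assumed $0.10/kWh rate

print(f"IT load: {it_load_kw:.1f} kW, with cooling: {total_kw:.2f} kW")
print(f"Annual power cost: ${cost_per_year:,.0f}")
```

Even with rough inputs like these, the exercise shows where the cooling overhead lands relative to the IT load itself.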

Since those costs stink, Gartner is advising that technology folks spend time designing their cooling systems. Maybe the next boom market will be in air conditioning design. And a few more items to ponder as these cooling issues get worked out:

Up your data center temperature from 70 degrees to 74 and humidity levels from 45 percent to 50 percent.
Measure your server energy efficiency. This is getting easier given that the EPA has given some guidance on the topic. Deeper measurements are hard to come by.
Grill your data center hosting company on energy efficiency. This is an important point that I'd bet few companies are acting on. Customers should make energy efficiency a priority, since the hosting company is only going to pass those costs along.

Sun Microsystems Data Center

Sun Microsystems has completed a new data center in Broomfield, Colo., built with efficiencies that the company says will save $1 million a year in electricity costs.

The data center features overhead cooling using Liebert XDs, airside economizing and flywheel uninterruptible power supplies (UPSes) from Active Power.

The project came about when Sun acquired StorageTek back in 2005, so it’s been in the works for a few years now. Both companies had data centers in Broomfield that sat on opposite sides of Route 36, a major road, and Sun decided to consolidate the two into one. It was able to condense 496,000 square feet of data center space at the old StorageTek campus into 126,000 square feet in the new location, a move that is saving 1 million kWh per month.
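The consolidation numbers hang together. A quick sanity check against the claimed $1 million a year; the per-kWh rate is an assumption, since Sun didn't publish its utility price:

```python
# Sanity-check the claimed $1M/year savings against the kWh figure.
# The per-kWh rate is an assumption; Sun did not publish its price.
kwh_saved_per_month = 1_000_000
rate_per_kwh = 0.083             # assumed commercial rate in $/kWh

annual_savings = kwh_saved_per_month * 12 * rate_per_kwh
print(f"${annual_savings:,.0f}")   # close to the claimed $1 million
```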

The move also cuts raised floor space from 165,000 square feet to just 700 square feet — enough to support a mainframe and an old Sun E25K box for testing. The elimination of that much raised floor, including the construction needed to brace it to support such heavy IT equipment, is saving Sun $4 million, according to Mark Monroe, Sun’s director of sustainable computing for the Broomfield campus.

The overhead Liebert XD data center cooling units feature variable speed drive (VSD) fans that allow cooling capacity to range from 8 kW up to 30 kW per rack. The Active Power flywheel UPSes eliminate the need for a whole room dedicated to housing UPS batteries.

“Flywheels are usually 95% to 97% efficient,” Monroe said. “Battery systems are usually in the low 90s, high 80s.”
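Those few percentage points translate into real energy at data center scale. A rough sketch of the annual UPS losses at an assumed 1 MW critical load, using midpoints of the ranges Monroe cites:

```python
# Annual UPS losses at an assumed 1 MW critical load, using midpoints
# of the efficiency ranges Monroe cites (96% flywheel, 90% battery).
load_kw = 1000
hours_per_year = 24 * 365

def annual_loss_kwh(efficiency):
    # Input power = load / efficiency; the excess is lost as heat.
    return (load_kw / efficiency - load_kw) * hours_per_year

flywheel_loss = annual_loss_kwh(0.96)
battery_loss = annual_loss_kwh(0.90)
print(f"flywheel: {flywheel_loss:,.0f} kWh/yr lost")
print(f"battery:  {battery_loss:,.0f} kWh/yr lost")
```

The lost energy also shows up twice, since every wasted watt has to be cooled as well.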

Finally, Sun is using a chemical-free method to treat its chilled water system that takes advantage of electromagnetics. The new method allows Sun to reuse the water in onsite irrigation systems and not have to flush out the water as often. It will save about 675,000 gallons of water and $25,000 per year.

In total, the company will be cutting its carbon dioxide emissions by 11,000 metric tons per year, largely because Broomfield gets so much of its power from coal-fired power plants.

San Jose Data Center

When Fortune Data Centers was looking for its first development project in 2006, it anticipated the importance of power and cooling in new facilities. “What we saw was that everybody would be looking for old data centers and then want to add power and cooling,” said John Sheputis, the company’s founder and CEO. “We thought, ‘let’s go find a mission-critical building that already had the power and cooling and then bring the fiber to it.’ We decided that a retiring fab (semiconductor fabrication facility) would have the power and cooling we needed.”

Sheputis and his partners acquired a former Seagate fabrication facility in San Jose, Calif., and are busy transforming the site into a 140,000 square foot data center in the heart of Silicon Valley. Fortune Data Centers plans to complete the 80,000 square foot first phase of the project in October.

It turns out that Sheputis didn’t have to look far to find fiber. The facility is sandwiched between data centers for NTT and Verizon. “The building’s property lines are wrapped in fiber,” said Sheputis.

Fortune Data Centers is led by executives with experience in both IT and real estate. Sheputis founded and managed several Silicon Valley ventures specializing in infrastructure management, including Totality (now a unit of Verizon Business). Tim Guarnieri, the VP and GM of Data Center Operations, is a veteran of the Palo Alto Internet Exchange (PAIX) and Switch and Data (SDXC).

The real estate side of the equation is represented by VP of acquisitions Will Fleming (previously Regional Acquisitions Director for Virtu Investments) and CFO Bruce MacLean, who was President of Embarcadero Advisors and a veteran of Trammell Crow.

The San Jose facility is expected to achieve LEED Gold certification, and will be supported by 16 megawatts of power from PG&E. Sheputis says power will be priced at roughly 9.5 cents per kilowatt-hour, which he called PG&E’s “most competitive industrial tariff in Northern California.” The building will be carrier-neutral, with connectivity from AT&T (T), Verizon (VZ), Level 3 (LVLT) and AboveNet.
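For scale, here's what that tariff implies. Round-the-clock utilization of the full 16 MW is a hypothetical upper bound, not Fortune's actual load:

```python
# Annual cost of 16 MW at the quoted 9.5 cents/kWh, assuming the
# (hypothetical) worst case of full draw around the clock.
capacity_kw = 16 * 1000
rate_per_kwh = 0.095
annual_cost = capacity_kw * 24 * 365 * rate_per_kwh
print(f"${annual_cost:,.0f} per year at full utilization")
```

Even a fraction of that bill explains why a fraction of a cent on the tariff matters to a data center developer.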

Sheputis said the existing power at the site was critical to the project. “There’s no way we would get financing without a building with construction advantages,” he said. “The capital markets are very constrained right now. It’s a huge issue. If you are an independent developer, it’s hard to find financing.”

Sheputis says the model is one that can work in other markets. “If we’re successful, we may remake the market for retiring fabs,” he said.

Google Container Data Center

Four years after the first reports of server-packed shipping containers lurking in parking garages, Google today confirmed its use of data center containers and provided a group of industry engineers with an overview of how they were implemented in the company’s first data center project in the fall of 2005. “It’s certainly more fun talking about it than keeping it a secret,” said Google’s Jimmy Clidaras, who gave a presentation on the containers at the first Google Data Center Efficiency Summit today in Mountain View, Calif.

The Google facility features a “container hangar” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the cold aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.
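The density figure checks out if you assume a standard 40-foot container, which Google did not state. A quick sketch:

```python
# Check the density figure against a standard 40 ft x 8 ft container
# footprint (an assumption; Google did not state the dimensions).
container_kw = 250
servers = 1160
footprint_sqft = 40 * 8          # 320 sq ft

density_w_sqft = container_kw * 1000 / footprint_sqft
watts_per_server = container_kw * 1000 / servers
print(f"{density_w_sqft:.0f} W/sq ft")      # matches "more than 780"
print(f"{watts_per_server:.0f} W per server")
```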

Google’s design focused on “power above, water below,” according to Clidaras, and the racks are actually suspended from the ceiling of the container. Chilled air from below the floor is pumped into the cold aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment.
“Water was a big concern,” said Urs Holzle, who heads Google’s data center operations. “You never know how well these couplings (on the water lines) work in real life. It turns out they work pretty well. At the time, there was nothing to go on.”

Google was awarded a patent on a portable data center in a shipping container in October 2008, confirming a 2005 report from PBS columnist Robert Cringely that the company was building prototypes of container-based data centers in a garage in Mountain View. Containers also featured prominently in Google’s patent filing for a floating data center that generates its own electricity using wave energy.

Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.

The data center facility, referred to as Data Center A, spans 75,000 square feet and has a power capacity of 10 megawatts. The facility has a Power Usage Effectiveness (PUE) of 1.25, and when the container load is measured across the entire hangar floor space, it equates to a density of 133 watts per square foot. Google didn’t identify the facility’s location, but the timeline suggests that it’s likely one of the facilities at Google’s three-building data center complex in The Dalles, Oregon.
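The hangar-floor figure is simple arithmetic from the numbers in that paragraph:

```python
# Averaging the 10 MW load over the 75,000 sq ft hangar floor.
facility_watts = 10 * 1_000_000
floor_sqft = 75_000
density = facility_watts / floor_sqft
print(f"{density:.0f} W/sq ft")    # ~133, as reported
```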

Data center containers have been used for years by the U.S. military. The first commercial product, Sun’s Project Blackbox, was announced in 2006. We noted at the time that the Blackbox “extends the boundaries of the data center universe, and gives additional options to managers of fast-growing enterprises.”

It turns out that containers have developed as key weapons in the data center arms race between Google and Microsoft, which last year announced its shift to a container model. Microsoft has yet to complete its first container data center in Chicago.

Google Data Center Secrets Now Showing On YouTube

Not long ago, Google data centers were a closely guarded secret. The company's technical innovations were regarded as a competitive advantage.

But on April 1, in the spirit of a promise made in 2006 to be more transparent, Google revealed details about its custom servers and its data centers.
Google opened its kimono before more than 100 industry leaders and journalists at its Mountain View, Calif., headquarters and now has posted a video tour of one of its data centers and videos of its presentation on YouTube.

"We disclosed for the first time details about the design of our ultraefficient data centers," Google engineer Jimmy Clidaras said in a blog post Thursday. "We also provided a first-ever video tour of a Google container data center as well as a water treatment facility. We detailed how we measure data center efficiency and discussed how we reduced our facility energy use by up to 85%. The engineers who developed our efficient battery backup solution even brought an actual Google server to the event."

At the Google Data Center Efficiency Summit, Google said that its Power Usage Effectiveness (PUE) -- the ratio of total data center power to power used by IT equipment -- had improved from an average of 1.21 in the third quarter of 2008 to 1.16 during the fourth quarter of 2008.

A PUE of 1.0 represents a data center with no power loss to cooling or energy distribution. The typical corporate data center, based on 2006 statistics, has a PUE of 2.0 or more.

At 1.16, Google's PUE is already better than the EPA's most optimistic target for data center efficiency in 2011, which calls for a PUE of 1.2.
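PUE itself is just a ratio, total facility power divided by IT equipment power. A minimal sketch using the figures above:

```python
# PUE = total facility power / IT equipment power; 1.0 would mean
# zero overhead for cooling and power distribution.
def pue(total_kw, it_kw):
    return total_kw / it_kw

# A typical 2006-era corporate facility: as much overhead as IT load.
print(pue(2000, 1000))             # 2.0

# Google's Q4 2008 fleet average: 16% overhead on top of the IT load.
print(round(pue(1160, 1000), 2))   # 1.16
```

Note that PUE says nothing about how efficiently the IT equipment itself uses power; it only measures facility overhead.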

Google showed off a data center that it has been operating since 2005. With a 10-megawatt equipment load, it contains 45 shipping containers, each of which holds 1,160 servers. It operates at a PUE of 1.25.

Power efficiency increasingly affects revenue for businesses that depend on data centers. In a presentation at the Google event, James Hamilton, VP and distinguished engineer at Amazon Web Services, explained that while servers look like the major cost of data centers, computing costs are trending downward while power and power distribution costs are flat or trending upward. "Power and things functionally related to power will be the most important thing, probably next year," he said.

One of the more surprising innovations of Google's server design -- seen here in a CNET photograph -- appears to be rather mundane: The company's custom-designed server hardware includes a 12-volt battery that functions as an uninterruptible power supply. This obviates the need for a central data center UPS, which turns out to be less reliable than on-board batteries.

"This design provides Google with UPS efficiency of 99.9%, compared to a top end of 92% to 95% for the most efficient facilities using a central UPS," explains Rich Miller in a post on Data Center Knowledge.

Google Data Center on YouTube

Judging by the heavy interest in last week's look at Google's previously secret server and data center design, I thought it would be useful to note that Google has now put much of the information on YouTube.

The disclosures came at a Google-sponsored conference on data center efficiency, which boils down to getting the most computing done with the least electrical power. The idea is core to Google's operations: the company operates at tremendous scale, tries to minimize its harm to the environment, and has a strong financial incentive to keep its costs low.
There are a number of videos from the conference online, starting with the tour of a Google data center. Google's servers, which the company itself designs, are packed 1,160 at a time into shipping containers that form a basic, modular unit of computing.
Also worth a look is the tour of Google's water treatment facility. Google uses water to cool the hot air the servers produce. Most Google data centers use chillers to cool the water by refrigeration, but one data center in Belgium is experimenting with relying solely on less power-hungry evaporative cooling.
Finally, Google published the proceedings of the conference itself--part one, part two, and part three.

Google Data Center in Council Bluffs, Iowa

Google is very happy to be located in Council Bluffs, IA.

We announced our plans to build a data center here in early 2007, and today we are a fully operational site that has already begun benefiting our users around the world. We have had an excellent experience in Council Bluffs as we've built out this $600 million investment, and we look forward to being a part of the Iowa community for many years to come.

We're eager to share more information with you about what we're doing in the area. On this site, you'll find information about:

what exactly a data center is
the kinds of jobs that are available
what Google does
how to contact us
our community outreach program
We hope that you use this site to familiarize yourself with the Council Bluffs data center, and feel free to share this site with anyone who may be interested.

We appreciate your help and support, and feel privileged to be part of the Council Bluffs community.

Google Will Delay Oklahoma Data Center

Google will delay construction of its data center in Pryor, Oklahoma, the company confirmed today. The $600 million facility was scheduled to be completed in early 2009, but instead will go online sometime in 2010.

The administrator of the Mid-American Industrial Park in Pryor, where Google has purchased 800 acres of land, told the Tulsa World the slowing economy was a factor in Google’s decision to push back its construction timetable. But the company said it was staggering the deployment of new data center space after bringing several projects online in recent months.

“Google’s data centers are crucial to providing fast, reliable services for our users and we’ve invested heavily in capacity to ensure we can meet existing as well as future demand,” a Google spokesperson said. “This means there is no need to make all our data centers operational from day one. We anticipate that the Pryor Creek facility will come into use within the next 12 to 18 months. Google remains committed to and excited about operating this facility in Mayes County.”

Google announced the Oklahoma data center project in May 2007, when it purchased 800 acres of land in Pryor for a massive facility that would employ 100 workers with an average salary of $48,000. The Pryor project was the third of four data center construction projects Google announced in the first half of 2007. The search company has completed the first data center at its Lenoir, North Carolina site and is preparing to begin production at its facility in Goose Creek, South Carolina.

Google Eyes Austria for New Data Center

Google is contemplating building a data center in Kronstorf, Austria, where it has purchased 185 acres of farmland for the project. The project has been in the works since May, when news of Google’s site location scouting trips in Austria was published on Twitter by Kronstorf residents.

UPDATE: While initial reports from AFP said Google had confirmed that it would build a data center at the Kronstorf site, the company says its process has not yet reached that stage.

“I’m pleased to confirm that we are looking at the potential opportunities offered to us by this site in Kronstorf, with regards to the possibility of building a data center facility,” a Google spokesperson told Data Center Knowledge. “This particular site has a number of features to recommend it – including a good environment in which to do business, excellent economic development team, strong infrastructure and the future possibility of attracting and retaining an excellent workforce.

“We have no immediate plans to start building on the site, as we will next proceed with some technical studies and design work. We are just at the stage of evaluating what future opportunities it might offer us, and we will keep you updated when our plans are firmed up.”

Kronstorf is a town of 3,000 near the city of Linz. The land purchased by Google is near several hydroelectric power plants on the river Enns, which would satisfy Google’s requirement for the use of renewable energy sources in its facilities. Kronstorf is also close to major universities in Linz, Steyr and Hagenberg, which could supply a trained IT workforce.

Google Sorts 1 Petabyte of Data in 6 Hours

Google has rewritten the record book for sorting massive volumes of data. The company said Friday that it had sorted 1 terabyte of data in just 68 seconds, eclipsing the previous mark of 209 seconds established in July by Yahoo. Google’s effort used 1,000 computers running MapReduce, while Yahoo’s effort featured a 910-node Hadoop cluster.

Then, just for giggles, they expanded the challenge: “Sometimes you need to sort more than a terabyte, so we were curious to find out what happens when you sort more and gave one petabyte (PB) a try,” wrote Grzegorz Czajkowski of the Google Systems Infrastructure Team. “It took six hours and two minutes to sort 1PB (10 trillion 100-byte records) on 4,000 computers. We’re not aware of any other sorting experiment at this scale and are obviously very excited to be able to process so much data so quickly.”
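The raw numbers imply serious per-machine throughput. A quick sketch (an even split across machines is a simplification; real MapReduce loads are never perfectly balanced):

```python
# Throughput implied by the petabyte sort: 10 trillion 100-byte
# records in 6 hours 2 minutes on 4,000 machines.
records = 10 * 10**12
record_bytes = 100
machines = 4000
seconds = 6 * 3600 + 2 * 60        # 21,720 s

total_bytes = records * record_bytes          # 1 PB
agg_gb_per_s = total_bytes / seconds / 1e9
per_machine_mb_s = total_bytes / seconds / machines / 1e6
print(f"{agg_gb_per_s:.0f} GB/s aggregate")   # ~46 GB/s
print(f"{per_machine_mb_s:.1f} MB/s per machine (even split assumed)")
```

Since sorting requires reading, shuffling, and writing the data, each machine sustained a multiple of that rate in actual disk and network traffic.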

A Closer Look at Google’s European Data Centers

Google’s purchase of land in Austria for a possible data center highlights the global nature of the search giant’s infrastructure. Google’s existing European footprint includes several data centers in the Netherlands and one in Belgium, as well as peering centers in major European bandwidth hubs.

Erwin Boogert recently posted new photos of Google’s facility near Groningen in the Netherlands. Erwin originally shot pictures of the facility in 2004, but revisited in late October for a second look. Erwin is an IT journalist who has also put together a Google Maps mashup with more information about Google’s operations in the Netherlands and Belgium.

Google Slows N.C. Build, Forgoing State Grant

Google has told the state of North Carolina that it won’t meet the job creation criteria for a $4.7 million state grant.

Report: Google Building ‘Fast Lane’ With ISPs

Google is seeking to place content distribution servers within the networks of major ISPs, creating a “fast lane” for its content, according to the Wall Street Journal. The arrangement could allow Google to use less bandwidth to serve its content, but the Journal questioned whether the move would be at odds with Google’s public support for Net neutrality.

Google has responded with a blog post asserting that Google’s edge caching is consistent with network neutrality and current practices of content network providers like Akamai and Limelight.

Google Buys More Land at Lenoir Data Center

Google may be slowing the pace of its data center construction in the short-term, but continues to prepare for major data center expansion over the long term. Google said today that it has paid $3.13 million to buy an additional 60 acres of land in Lenoir, North Carolina, near where the company has built a data center on a 220-acre property. The announcement follows news that Google has slowed the pace of construction at its Lenoir project and told the state of North Carolina that it won’t meet the job creation criteria for a $4.7 million state grant.

“This is a strategic location for our company,” Tom Jacobik, operations manager of the Lenoir facility, told the Asheville Citizen-Times. “We look forward to a long and active presence here.” Google has finished one data center building at Lenoir, which it opened in May. It plans to eventually complete a second data center building at the site.

Rumor: Google Building Its Own Router?

There’s a rumor making the rounds today that Google is building its own router. The report appeared first at the SD Times blog, was picked up at Bnet and is now being discussed on Slashdot. The gist of the rumor is that Juniper loses a big client and Cisco should be worried. Most of Google’s custom hardware development has focused on optimizing its in-house systems for peak efficiency, rather than developing commercial products. Google (GOOG) is declining comment.

Google Throttles Back on Data Center Spending

Google (GOOG) spent $368 million on its infrastructure in the fourth quarter of 2008 as it scaled back its ambitious data center building boom, idling a $600 million project. The fourth quarter capital expenditure (CapEx) total, which was included in today’s earnings release, was less than half the $842 million Google spent on its data centers in the first quarter of 2008.

Will Project Delays Kill Data Center Incentives?

In the big-picture analysis, the decisions by Google and Microsoft to delay major data center projects seem prudent. The projects cost $550 million to $600 million apiece, and neither company appears likely to run out of data center capacity anytime soon.

But state and local officials in Oklahoma and Iowa are focused on a smaller picture. The postponements of Google’s project in Pryor, Oklahoma and Microsoft’s facility in West Des Moines, Iowa dash any hopes that huge data center projects will create jobs in the midst of the economic crisis. Both states offered significant tax incentive packages to attract these companies, with expectations of a payoff in high-tech jobs.

Iowa Gov. Chet Culver “is obviously disappointed by the news that Microsoft has decided to delay their plans for a new data center in West Des Moines,” Culver’s office said in a statement Friday. “This is just one more sign that no one is immune from the economic recession gripping our nation. The Governor remains hopeful that conditions will improve and Microsoft will begin construction on their new facility soon.”

A Glimpse Inside Google’s Lenoir Data Center

Google has cracked open its data center doors – just slightly – and allowed media to have a look inside one of its facilities. The company is renowned for the secrecy surrounding its data center operations, which it considers to be a competitive advantage in its battle with business rivals like Microsoft and Yahoo. Google recently allowed a reporter and photographer from the Charlotte Observer to visit the office area of its data center in Lenoir, North Carolina, about 70 miles northwest of Charlotte.

Google Plans Data Center in Finland

Google has bought a former paper mill in southeastern Finland and is likely to convert the facility into a data center, the company said today. Google will pay $51.6 million for the site, and expects to close on the purchase by the end of the first quarter.

Google (GOOG) has been slowing its data center development in the United States, recently idling a scheduled project in Oklahoma. But the company continues to invest in land deals in Europe in preparation for future data center expansion. In November Google purchased 185 acres of farmland in Kronstorf, Austria for future development as a data center.

Gmail Outage Focused in European Network

Google says that yesterday morning’s Gmail outage was caused by disruptions in its European data centers. The incident was triggered by unexpected issues with a software update, resulting in more than two hours of downtime for the widely used webmail service. Here’s an explanation:

"This morning, there was a routine maintenance event in one of our European data centers. This typically causes no disruption because accounts are simply served out of another data center. Unexpected side effects of some new code that tries to keep data geographically close to its owner caused another data center in Europe to become overloaded, and that caused cascading problems from one data center to another. It took us about an hour to get it all back under control."

Google is offering a 15-day service credit to Google Apps customers who pay to use Gmail with their domains. As background, here’s some additional information on Google’s European data centers. The company has purchased land in Austria and Finland to expand its data center footprint in Europe, but has made no announcements yet about whether or when it will build in these locations.

Google: Stalled Data Centers Will Be Built

Google (GOOG) isn’t abandoning the data center projects where it has slowed or halted construction due to the slowing economy, the company said this week. In a presentation at the Goldman Sachs Technology and Internet Conference in San Francisco, executives said Google intends to eventually complete a planned facility in Oklahoma, and that recent land purchases reflect long-term planning for an even larger data center network.

In October Google said it will delay construction of a $600 million data center campus in Pryor, Oklahoma that was originally scheduled to be completed in early 2009. The company also reportedly has halted construction work on a second data center building in Lenoir, North Carolina, where Google has already built and commissioned one data center.

“We will build those data centers,” said Alan Eustace, Google’s senior vice president for engineering. “There’s no doubt that over the life of the company we will need that computation. None of those sites have been shelved.”

“All the demand coming our way is relentless,” added chief financial officer Patrick Pichette. “It’s not a question of if, it’s a question of when.”

Google Confirms Data Center in Finland

It’s official: Google will build a major data center at a former paper mill in Hamina, Finland, the company said today. Google bought the former Stora Enso newsprint plant for $51 million last month, and said it was “likely” to use the facility for a data center. Today Google posted details about the Hamina project on the data center section of its web site.

Google said it expected to invest 200 million Euros (about $252 million) in the project. That’s a smaller investment than the $600 million the company has announced for U.S. projects in North Carolina, South Carolina and Iowa. It wasn’t immediately clear whether the smaller number reflects a change in scope for Google’s capital expenditures on data centers or was related to the specifics of the Hamina property.

“When fully developed, this facility will be a critical part of our infrastructure for many years to come,” Google said. “Limited testing of the facility should be underway in 2010 and the center should be fully operational later that year.”

How Google Routes Around Outages

Making changes to Google’s search infrastructure is akin to “changing the tires on a car while you’re going at 60 down the freeway,” according to Urs Holzle, who oversees the company’s massive data center operations. Google updates its software and systems on an ongoing basis, usually without incident. But not always. On Feb. 24 a bug in the software that manages the location of Google’s data triggered an outage in Gmail, the widely-used webmail component of Google Apps.

Just a few days earlier, Google’s services remained online during a power outage at a third-party data center near Atlanta where Google hosts some of its many servers. Google doesn’t discuss operations of specific data centers. But Holzle, the company’s Senior Vice President of Operations and a Google Fellow, provided an overview of how Google has engineered its system to manage hardware failures and software bugs. Here’s our Q-and-A:

Data Center Knowledge: Google has many data centers and distributed operations. How do Google’s systems detect problems in a specific data center or portion of its network?

Urs Holzle: We have a number of best practices that we suggest to teams for detecting outages. One way is cross monitoring between different instances. Similarly, black-box monitoring can determine if the site is down, while white-box monitoring can help diagnose smaller problems (e.g. a 2-4% loss over several hours). Of course, it’s also important to learn from your mistakes, and after an outage we always run a full postmortem to determine if existing monitoring was able to catch it, and if not, figure out how to catch it next time.
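The black-box/white-box distinction Holzle draws can be sketched in a few lines. The endpoint, metric and 2% threshold here are hypothetical illustrations, not Google's actual monitoring code:

```python
# Sketch of the two monitoring styles: a black-box probe sees only
# up/down from outside, while a white-box check reads internal
# metrics and can catch small regressions. The URL and threshold
# below are hypothetical, not Google's configuration.
import urllib.request

def black_box_check(url):
    """Is the site up at all, as seen from outside?"""
    try:
        return urllib.request.urlopen(url, timeout=5).status == 200
    except OSError:
        return False

def white_box_check(error_rate):
    """Flag a sustained error rate a simple up/down probe would miss."""
    return error_rate < 0.02

# A 3% error rate passes a black-box up/down test but fails white-box.
print(white_box_check(0.03))   # False -> alert the on-call engineer
```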

DCK: Is there a central Google network operations center (NOC) that tracks events and coordinates a response?

Urs Holzle: No, we use a distributed model with engineers in multiple time zones. Our various infrastructure teams serve as “problem coordinators” during outages, but this is slightly different than a traditional NOC, as the point of contact may vary based on the nature of the outage. On-call engineers are empowered to pull in additional resources as needed. We also have numerous automated monitoring systems, built by various teams for their products, that directly alert an on-call engineer if anomalous issues are detected.

Efficient UPS Aids Google’s Extreme PUE

Google continues to improve its energy efficiency, and is telling the industry how it’s doing it. After years of secrecy surrounding its data center operations, Google is disclosing many of its innovations today at the first Google Data Center Efficiency Summit in Mountain View, Calif.

In a morning presentation, Google engineers addressed its Power Usage Effectiveness (PUE) ratings, which have generated discussion within the industry since Google disclosed in October that its six company-built data centers had an average PUE of 1.21. That benchmark improved to 1.16 in the fourth quarter, and hit 1.15 in the first quarter of 2009, according to Google’s Chris Malone. The most efficient individual data center (described as “Data Center E”) has a PUE of 1.12.

“These are standard air-cooled servers, and best practices is what enabled these results,” said Malone. “What’s encouraging is that we’ve achieved this through the application of practices that are available to most data centers. There’s great potential for all data centers to improve their PUE.”
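
For readers new to the metric: PUE is simply total facility power divided by the power delivered to IT equipment, so 1.0 would mean zero cooling and distribution overhead. A minimal sketch (the kilowatt figures are illustrative, not Google's):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    power delivered to IT equipment. 1.0 is a theoretically perfect
    facility with zero cooling/distribution overhead."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,150 kW in total to deliver 1,000 kW to servers
# has a PUE of 1.15 -- Google's reported Q1 2009 fleet average.
print(round(pue(1150, 1000), 2))
```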

But there’s also some serious Google magic at work. One of the keys to Google’s extraordinary efficiency is its use of a custom server with a power supply that integrates a battery, allowing it to function as an uninterruptible power supply (UPS). The design shifts the UPS and battery backup functions from the data center into the server cabinet (see our February 2008 story describing this technology). This design provides Google with UPS efficiency of 99.9 percent, compared to a top end of 92 to 95 percent for the most efficient facilities using a central UPS.
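
That efficiency gap translates directly into wasted power. A rough comparison, assuming a hypothetical 1 MW IT load (the load figure is an assumption for illustration):

```python
def ups_loss_kw(it_load_kw: float, ups_efficiency: float) -> float:
    """Power dissipated in the UPS stage to deliver a given IT load.
    Efficiency is output power / input power."""
    return it_load_kw / ups_efficiency - it_load_kw

central = ups_loss_kw(1000, 0.93)    # efficient central UPS (~93%)
on_board = ups_loss_kw(1000, 0.999)  # per-server battery (99.9%)

# The central UPS burns roughly 75 kW continuously on a 1 MW load;
# the on-board battery design burns about 1 kW.
print(round(central, 1), round(on_board, 1))
```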

Google’s Custom Web Server, Revealed

It’s long been known that Google builds its own web servers, enabling it to design the servers for peak performance and energy efficiency. At today’s Google Data Center Energy Summit, the company put one of its custom servers on display. Here’s a brief video of the server, which features a power supply that integrates a battery, allowing it to function as an uninterruptible power supply (UPS). The design shifts the UPS and battery backup functions from the data center into the server cabinet.

Google Unveils Its Container Data Center

Four years after the first reports of server-packed shipping containers lurking in parking garages, Google today confirmed its use of data center containers and provided a group of industry engineers with an overview of how they were implemented in the company’s first data center project in the fall of 2005. “It’s certainly more fun talking about it than keeping it a secret,” said Google’s Jimmy Clidaras, who gave a presentation on the containers at the first Google Data Center Efficiency Summit today in Mountain View, Calif.

The Google facility features a “container hanger” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the cold aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.
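
The cited figures hang together arithmetically, as a quick back-of-the-envelope check shows (the derived floor area and per-server wattage below are computed from the article's numbers, not separately confirmed):

```python
# Back-of-the-envelope check on the container specs cited above.
servers_per_container = 1160
container_kw = 250
watts_per_sqft = 780  # power density cited in the article

# Implied floor area of one container at that density:
floor_area_sqft = container_kw * 1000 / watts_per_sqft
# Implied average draw per server:
watts_per_server = container_kw * 1000 / servers_per_container

print(round(floor_area_sqft))   # ~321 sq ft, plausible for a 40-foot box
print(round(watts_per_server))  # ~216 W per server
```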

Google’s design focused on “power above, water below,” according to Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the cold aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment.

Inside A Google Data Center

Google will soon be publishing videos of many of the sessions at its Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. In the meantime, here’s a sneak preview with outtakes from the video tour of a Google data center, which showcased the company’s use of shipping containers to house servers and storage. Each of these 40-foot data center containers can hold up to 1,160 servers, and Google has been using them since it built this facility in late 2005. The summit was a Google event for industry engineers and executives to discuss innovations to improve data center efficiency. This video runs about 5 minutes, 45 seconds.

Google’s Data Center Water Treatment Plant

Earlier today we noted the industry’s efforts to reduce the use of water by large cloud computing data centers. Google says two of its newer data centers are now “water self-sufficient,” including the company’s new data center in Belgium, which is located next to an industrial canal. Google has built a 20,000 square foot water treatment plant to prepare the canal water for use in its nearby data center. Here’s a video with more information about the water treatment plant, which was first presented at the Google Efficient Data Centers Summit last week in Mountain View.

Microsoft, Google and Data Center Glasnost

One of the best-attended Tuesday sessions at The Uptime Institute’s Symposium 2009 in New York was a presentation by Google’s Chris Malone. As has been noted elsewhere, Malone’s talk summarized much of the information that Google disclosed April 1 at its Data Center Efficiency Summit. But there was a noteworthy moment during the question and answer period when Daniel Costello approached the mike.

Costello is one of the architects of Microsoft’s CBlox data center container strategy. Keep in mind that Microsoft has yet to finish its first containerized facility in Chicago, and Costello had just watched a video documenting Google’s completion of a data center container farm in Sept. 2005, nearly three years before Microsoft announced its project. Would there be tension, or perhaps a debate about the dueling designs?

Google CapEx Continues To Trend Lower

Google (GOOG) spent $263 million on its infrastructure in the first three months of 2009, the lowest quarterly total since it began operating its own data centers, as it continued in cash conservation mode. The first quarter capital expenditure (CapEx) total, which was included in today’s earnings release, was less than a third of the $842 million Google spent on its data centers in the same quarter in 2008.

Google: No Plans Yet for Second S.C. Site

This week Google invited reporters to tour the office area of its new data center in Goose Creek, South Carolina, following up on a similar open house at its facility in Lenoir, North Carolina. The Charleston Post and Courier notes that several employees bring their dogs to the office and hold ping-pong tournaments. “We work really hard, but we play hard too,” said Bill League, data center facilities manager.

While the event focused on community relations, there was one nugget of information: Google told the paper it has no current development plans for its 466-acre site in Blythewood, South Carolina. “We regularly review our resources and customer needs, and will keep the community informed if and when new plans develop,” Google said in a statement. “We expect to continue working with local officials on infrastructure and other aspects of the land acquisition.”

Google Gets Patent for Data Center Barges

The U.S. Patent Office has awarded Google a patent for its proposal for a floating data center that uses the ocean to provide power and cooling. Google’s patent application was filed in Feb. 2007, published in October 2008 and approved on Tuesday (and quickly noted by SEO by the Sea).

The patent application describes floating data centers that would be located 3 to 7 miles from shore, in 50 to 70 meters of water. If perfected, this approach could be used to build 40 megawatt data centers that don’t require real estate or property taxes.

The Google design incorporates wave energy machines (similar to Pelamis Wave Energy Converter units) which use the motion of ocean surface waves to create electricity and can be combined to form “wave farms.” The patent documents describe a cooling system based on sea-powered pumps and seawater-to-freshwater heat exchangers.

Rolling Outage for Google

Many users are experiencing trouble reaching Google today in a rolling outage that is affecting some regions more than others. The troubles were first seen at Google News, which came back online after an outage this morning that apparently coincided with the addition of video links to news searches.

Meanwhile, there are widespread reports on Twitter of trouble reaching other Google services, and even including the home page. The Google Apps status page is acknowledging a “service disruption” for Gmail and says a problem with Google Calendar has been resolved.

UPDATE: Urs Holzle, who oversees the company’s data center operations, has posted an explanation on the official Google blog. “An error in one of our systems caused us to direct some of our web traffic through Asia, which created a traffic jam,” Holzle wrote. “As a result, about 14% of our users experienced slow services or even interruptions. We’ve been working hard to make our services ultrafast and ‘always on,’ so it’s especially embarrassing when a glitch like this one happens. We’re very sorry that it happened, and you can be sure that we’ll be working even harder to make sure that a similar problem won’t happen again.”

Google Traffic Shifted to NTT During Outage

What happened during Thursday’s performance problems for Google? The company said that “an error in one of our systems caused us to direct some of our web traffic through Asia, which created a traffic jam.” Renesys, which tracks Internet routing, has additional details in a blog post today.

Renesys says traffic was shifted to NTT, whose network received an influx of traffic bound for Google that would normally be routed through Level 3 and/or AT&T. At one point NTT’s network was handling 85 to 90 percent of the traffic bound for Google, according to Renesys. Check out Martin Brown’s analysis for more.

NTT America said that traffic flow may have shifted, but the problems were due entirely to issues at Google. “NTT’s network was not the cause of Google’s performance problems,” said a spokesperson for NTT America, which maintains the company’s network. “No traffic jam occurred at NTT.”

Google on ‘The Data Center as a Computer’

We’ve written often about Google’s data center operations, which include innovations such as data center containers and custom servers featuring on-board UPS batteries. Google’s approach to design and innovation is shaped by a vision of the data center that extends beyond traditional thinking. Two of the company’s data center thought leaders, Luiz Andre Barroso and Urs Holzle, have published The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (PDF), a paper that summarizes the company’s big-picture approach to data center infrastructure.

Google Opens Council Bluffs Data Center

About 650 people attended a ceremony yesterday to mark the “launch” of Google’s new data center in Council Bluffs, Iowa. The $600 million facility is seen as a key economic development win for Iowa, and will eventually result in 200 full-time jobs paying $50,000 a year. “Google has opened up the door to opportunities for us – and Iowa – that we didn’t have in the past,” Council Bluffs Mayor Tom Hanafan said Tuesday, when state and local leaders, workers and about 100 local residents gathered at the new facility. “All of a sudden, companies are looking at Iowa a little differently. Google has put us on the map.”

Google is still hiring workers in Council Bluffs, but local officials noted that the company has room to expand. Google has purchased an additional 1,000 acres of land about four miles from the first phase of its data center project. The company was considering an adjacent 130-acre piece of land for the second phase, but may eventually expand at the larger parcel instead. The purchase doesn’t necessarily mean that Google is expanding the scope of its project, but gives it the space to build additional data centers if needed.

Vint Cerf at the Google Internet Summit

In early May Google hosted the Google Internet Summit 2009 at its Mountain View, Calif. campus. The event brought together thought leaders in Internet infrastructure, with the goal of gathering “a wide range of knowledge to inform Google’s future plans.” This video presents the introductory remarks from Vint Cerf, Google’s chief Internet evangelist and the co-designer of the TCP/IP network protocol. “The Internet is at an important flex point in its history,” Cerf says. “Scaling, in many different ways, is still an important issue. Cloud computing is adding another dimension to the way the Internet is being used.” This video runs 9 minutes, but the substance of Cerf’s remarks commences at the 4 minute mark after some opening greetings and housekeeping.

Voldemort Industries

Google is serious about its data centers. But it’s also determined not to take itself too seriously, as evidenced by an anecdote in Oregon Business News, which examines the impact of a Google data center on the economy in The Dalles, Oregon. “As you pull up to the riverfront campus, you’ll spot a Voldemort Industries sign, a self-effacing reference to the Harry Potter character known as ‘He-Who-Must-Not-Be-Named,’” writes The Oregonian.

Okay, perhaps it’s not perfectly aligned with the whole “don’t be evil” motto. The sign is a reference to the use of a code name (Project 02) by local officials during the planning process for the Google data center in The Dalles. The whole cloak-and-dagger business apparently didn’t sit well with some in the community.

“Google officials say they learned from the backlash, and make a point to be transparent when they open data centers,” the story notes. “They have also gotten involved in The Dalles. Workers volunteer at cleanups or Habitat for Humanity; a garden at the edge of the property is public; grants go to community groups. Last fall, Google hosted an open house at its cafeteria and visitor center.”

The Billion Dollar HTML Tag

Can a single HTML tag really make a difference on a corporation’s financial results? It can at Google, according to Marissa Mayer, who says web page loading speed translates directly to the bottom line.

“It’s clear that latency really does matter to users,” said Mayer, the VP of Search and User Experience at Google and today’s keynote speaker at the O’Reilly Velocity Conference. Google found that delays of fractions of a second consistently caused users to search less. As a result, its engineers consistently refine page code to capture split-second improvements in load time.

This phenomenon is best illustrated by a single design tweak to the Google search results page in 2000 that Mayer calls “The Billion Dollar HTML Tag.” Google founders Sergey Brin and Larry Page asked Mayer to assess the impact of adding a column of text ads in the right-hand column of the results page. Could this design, which at the time required an HTML table, be implemented without the slower page load time often associated with tables?

Mayer consulted the W3C HTML specs and found a tag (the “align=right” table attribute) that would allow the right-hand table to load before the search results, adding a revenue stream that has been critical to Google’s financial success.
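
The trick is that with `align="right"`, the ads table can appear first in the markup yet float to the right side of the page, so the browser renders it without waiting for (or delaying) the search results that follow. A hypothetical reconstruction of the idea, not Google's actual markup:

```python
# Illustrative reconstruction of the layout trick described above.
# The markup is a hypothetical sketch, not Google's actual page source.
page = """
<table align="right">
  <tr><td>Sponsored links</td></tr>
</table>
<div>...organic search results stream in below...</div>
"""

# The ads table precedes the results in source order, yet the
# align attribute floats it right, avoiding the render-blocking
# layout cost of wrapping both columns in one outer table.
print('align="right"' in page)  # True
```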

Managing Megasites: ‘An Insane Amount of Will’

One runs a popular service on just 350 servers, while another likely has more than a million servers. The common denominator: major traffic. Executives from six of the web’s most popular properties – Google, Microsoft, Yahoo, Facebook, MySpace and LinkedIn – shared the stage at Structure 09 yesterday to discuss their infrastructure and innovations.

Managing a megasite requires plenty of hardware. But that’s not the secret sauce, according to Vijay Gill, the Senior Manager of Engineering and Architecture at Google (GOOG). “The key is not the data centers,” said Gill. “Those are just atoms. Any idiot can build atoms and have this vast infrastructure. How you optimize it – those are the hard parts. It takes an insane amount of will.”

The challenges faced by the six sites varied. “I’m taking a minimalist approach,” said Lloyd Taylor, the VP of Technical Operations for the LinkedIn social network. “How little infrastructure can we use to run this? The whole (LinkedIn) site runs on about 350 servers.” That’s due largely to the fact that much of the content served by LinkedIn consists of profiles and discussion groups, which are heavy on text. “We’re not a media intensive site,” said Taylor.

Was Google Busting on Bing?

Last week we wrote about a panel at the Structure 09 conference featuring technologists from the largest Internet sites, including Google’s Vijay Gill and Najam Ahmad from Microsoft. The Register also covered this panel in a story titled Google mocks Bing and the stuff behind it, noting that Gill referenced several data points with the observation that “If you Bing for it, you can find it.”

Gill, the Senior Manager of Engineering and Architecture at Google, has blogged a response titled Google Does Not Mock Bing.

“I wasn’t mocking Bing when I said ‘Bing for it, you can find it.’” Gill writes. “I meant that seriously, in the spirit of giving props to a competitor, and a good one at that. Najam and I have been friends since before Google had a business plan, and I have the greatest respect for him and for Microsoft as a company.” Gill goes on to compare the different infrastructure approaches for the Google and Bing search products.

Google App Engine Hit By Outage

It’s been a rough week for uptime. Google App Engine has been struggling with performance problems for hours, and appears to be down. The problems began at 6:30 a.m. Pacific time, when App Engine began experiencing high latency and error rates. “All applications accessing the Datastore are affected,” Google said in a notice to developers. Shortly afterward the service went into “unplanned maintenance mode” and began operating as read-only, meaning developers couldn’t update their apps. “Our engineering teams are looking into the root cause of the problem,” Google said.

The App Engine Status Page is currently unavailable. The App Engine team is providing offsite updates via Twitter. UPDATE: At 3:15 Eastern, the status page is back: “Datastore read access has been reenabled and the team expects Datastore write access will be reenabled shortly.”

Google Launches Chrome Operating System

Google confirmed late Tuesday that it is launching a PC operating system based on its Chrome web browser. “Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks,” Google’s Sundar Pichai wrote on the Google blog. “Later this year we will open-source its code, and netbooks running Google Chrome OS will be available for consumers in the second half of 2010.”

“Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year,” Pichai added. “The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies.”

The New York Times has early coverage in a story for Wednesday’s paper, which anticipated the browser being unveiled in a Google blog post Wednesday afternoon (makes me wonder if Google published early once the Times story was available online).

Google’s Chiller-less Data Center

Google (GOOG) has begun operating a data center in Belgium that has no chillers to support its cooling systems, a strategy that will improve its energy efficiency while making local weather forecasting a larger factor in its data center management.

Chillers, which are used to refrigerate water, are widely used in data center cooling systems but require a large amount of electricity to operate. With the growing focus on power costs, many data centers are reducing their reliance on chillers to improve the energy efficiency of their facilities.

This has boosted adoption of “free cooling,” the use of fresh air from outside the data center to support the cooling systems. This approach allows data centers to use outside air when the temperature is cool, while falling back on chillers on warmer days.

Google has taken the strategy to the next level. Rather than using chillers part-time, the company has eliminated them entirely in its data center near Saint-Ghislain, Belgium, which began operating in late 2008 and also features an on-site water purification facility that allows it to use water from a nearby industrial canal rather than a municipal water utility.

Year-Round Free Cooling
The climate in Belgium will support free cooling almost year-round, according to Google engineers, with temperatures rising above the acceptable range for free cooling about seven days per year on average. The average temperature in Brussels during summer reaches 66 to 71 degrees, while Google maintains its data centers at temperatures above 80 degrees.

So what happens if the weather gets hot? On those days, Google says it will turn off equipment as needed in Belgium and shift computing load to other data centers. This approach is made possible by the scope of the company’s global network of data centers, which provide the ability to shift an entire data center’s workload to other facilities.
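
The cooling strategy described above amounts to a simple decision rule. This sketch is a hypothetical illustration of the logic, with the 80-degree threshold taken from the article; the function and its behavior are assumptions, not Google's actual control system:

```python
FREE_COOLING_MAX_F = 80  # approximate operating temperature cited above

def cooling_plan(outside_temp_f: float, has_chillers: bool) -> str:
    """Hypothetical decision logic for a free-cooled facility."""
    if outside_temp_f <= FREE_COOLING_MAX_F:
        return "free cooling"       # outside air handles the load
    if has_chillers:
        return "chillers"           # conventional fallback on hot days
    # The chiller-less Belgium design: shed load to other data centers.
    return "shift load elsewhere"

print(cooling_plan(68, has_chillers=False))  # typical Brussels summer day
print(cooling_plan(85, has_chillers=False))  # one of the ~7 hot days/year
```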

Google Slashes CapEx Even Further

Google (GOOG) spent just $139 million on its data centers in the second quarter of 2009, as it further squeezed capital spending that was already at historic lows. On one level, the company’s ability to reduce infrastructure spending reflects well on Google’s capacity planning during its 2007 building boom, when it announced four new data centers. It’s also part of a broader effort by Google to reduce spending during the current economic slowdown.

The second quarter capital expenditure (CapEx) total was dramatically lower than the company spent during the same period in 2006 ($690 million), 2007 ($597 million) and 2008 ($698 million).

Google’s PUE Numbers Slip Slightly

Many companies have released information about their data center energy efficiency using the Power Usage Effectiveness (PUE) metric. But none have received more scrutiny than Google, whose exceptional PUE numbers have been widely discussed in the data center industry. The latest data from Google show that the company’s numbers can go up as well as down.

Google’s quarterly energy-weighted average PUE rose to 1.20 in the second quarter, up from 1.19 in the first quarter of 2009. The best-performing facility had a PUE of 1.15 for the quarter, compared to 1.11 for the most efficient site in Q1.

The Apple-Google Data Center Corridor

Google and Apple may be having their tensions at the boardroom level, as seen in this week’s news that Google CEO Eric Schmidt will resign as a director of Apple. But the two technology giants are aligned in another area: the merits of western North Carolina as a haven for massive Internet data centers.

Apple’s planned $1 billion data center in Maiden, North Carolina is just 25 miles from a huge Google data center complex in Lenoir. The proximity is not an accident, as the Google project in Caldwell County prompted economic development officials in nearby towns to begin pursuing data center development.

That included Catawba County, where economic development director Scott Millar began a concerted effort to attract data centers, a process that culminated in Apple’s decision to build in Maiden. Millar believes the presence of the technology sector’s two marquee names will attract additional projects, establishing the region as a major data center destination.

“I think that from an investment standpoint, now every CIO in the country is forced to look at the merits of the Apple/Google Corridor,” said Millar. “We’re not going to quit here.”

How Google Manages Data Across Data Centers

How does Google manage data across the many data centers the company operates? At the Google I/O Developer Conference in May, Google’s Ryan Barrett discusses “multihoming” across multiple data center locations with large scale storage systems. Barrett’s presentation examines different approaches to multi-homing, with a particular focus on how Google engineers the App Engine datastore. This video runs about 1 hour. (Link via High Scalability, which has additional commentary).

Yes, Gmail Was Down

Google’s Gmail is experiencing an outage this afternoon. What’s remarkable is the volume of commenting and Tweeting (looks like about 1,000 per minute) with users remarking/complaining about the outage, a reminder of how essential Gmail has become for many users.

“We’re aware that people are having trouble accessing Gmail,” Google said on its Twitter account. “We’re working on fixing it.” A post on the Gmail blog added that “if you have IMAP or POP set up already, you should be able to access your mail that way in the meantime.” UPDATE: As of 5:15 pm Eastern time, it appears the Gmail web interface is back up for most users.

Gmail also had an extended outage in February when a code change triggered “cascading problems” that overloaded a data center in Europe. For more background on that outage, along with a broader overview of Google’s incident management, see our interview with Google senior VP of operations Urs Hoelzle (How Google Routes Around Outages).

Router Ripples Cited in GMail Outage

Google has published an update on this afternoon’s Gmail downtime. “Today’s outage was a Big Deal, and we’re treating it as such,” writes Ben Treynor, Google’s VP of Engineering and Site Reliability Czar. “We’re currently compiling a list of things we intend to fix or improve as a result of the investigation.”

The problem? Treynor says Google underestimated the load that routine maintenance on “a small fraction” of Gmail servers would place on the routers supporting the application. “At about 12:30 pm Pacific a few of the request routers became overloaded and in effect told the rest of the system ‘stop sending us traffic, we’re too slow!’,” Treynor wrote. “This transferred the load onto the remaining request routers, causing a few more of them to also become overloaded, and within minutes nearly all of the request routers were overloaded.”
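
The cascade Treynor describes is a classic overload dynamic: each router that saturates pushes its share onto the survivors. A toy model (the capacities and counts are invented for illustration, not Gmail's actual numbers):

```python
def surviving_routers(total_load: float, capacity_per_router: float,
                      routers: int) -> int:
    """Toy model of the cascade: any router pushed over capacity
    stops taking traffic, redistributing its share onto the rest."""
    healthy = routers
    while healthy and total_load / healthy > capacity_per_router:
        healthy -= 1  # one more router saturates and sheds its traffic
    return healthy

# Ten routers comfortably share the load at 80% of capacity each...
print(surviving_routers(800, 100, routers=10))  # 10
# ...but take a few out for maintenance and the rest topple in turn.
print(surviving_routers(800, 100, routers=7))   # 0
```

The second call shows the non-linearity that makes this failure mode dangerous: removing 30% of capacity doesn't degrade service by 30%, it takes the whole pool down.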

All Quiet at Google’s Oklahoma Data Center

Amazon isn’t the only cloud builder that has idled a data center project, as the recession has led Google to scale back its capital investments in data center infrastructure. The most visible sign of this is the postponement of the company’s planned data center project in Pryor, Oklahoma. The $600 million facility was scheduled to be completed in early 2009, but last fall the company said it would be delayed and may come online sometime in 2010.

A reader recently drove by the Google site at the Mid-American Industrial Park in Pryor and took some photos of the site. There’s no construction activity at present, and the site appears unchanged from the date of the stoppage. At that time, Google had already done some work on the existing structure that will serve as the first data center at the location, where Google owns 800 acres of land for future expansion.

Facebook, Google Ready for Faster Ethernet

As Facebook was announcing that it has reached 300 million users last week, network engineer Donn Lee was making the case for faster Ethernet networks – definitely 100-Gigabit Ethernet, and perhaps even Terabit Ethernet.

It is “not unreasonable to think Facebook will need its data center backbone fabric to grow to 64 terabits a second total capacity by the end of next year,” Lee told attendees of a seminar hosted by the Ethernet Alliance, an industry consortium focused on the expansion of Ethernet technology.

10 Gigabit Ethernet is currently the fastest rate available, although higher capacity can be achieved by aggregating 10 Gigabit links. The Ethernet Alliance event last week brought together networking developers working on the IEEE 802.3ba standard for 40/100 Gigabit Ethernet. Presentations from Facebook’s Lee and Google network architect Bikash Koley offered a deep dive into network technologies, architecture and the future of Ethernet standards. The next phase of development envisioned is Terabit Ethernet, although some stakeholders see 400 Gigabit Ethernet as a likely interim step.

1 Billion Page Views A Day for YouTube

Here’s a milestone you don’t see every day: YouTube announced this morning that it is serving more than a billion page views a day. That works out to 11,574 page views per second, and means that YouTube serves up a million page views about every 90 seconds.
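
The arithmetic is easy to verify:

```python
# Check the per-second figures implied by a billion daily page views.
page_views_per_day = 1_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

per_second = page_views_per_day / seconds_per_day
print(round(per_second))  # 11574 page views per second

# At that rate, a million views go by in under a minute and a half.
seconds_per_million = 1_000_000 / per_second
print(round(seconds_per_million))  # ~86 seconds
```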

“This is a great moment in our short history,” YouTube CEO and co-founder Chad Hurley wrote. What’s the broader significance? Hurley notes that “clip culture is here to stay,” and it’s a trend with huge ramifications for the Internet infrastructure industry. The Internet won’t replace the TV overnight, but entertainment consumption patterns are changing rapidly. As this shift accelerates, it will drive demand for more servers, storage, bandwidth and content delivery technology to make it all work smoothly.

Vint Cerf on the Future of the Internet

What does the “Father of the Internet” believe lies ahead for the Internet? Vint Cerf wrote the software that connected the first servers on the Internet. As Google’s Chief Internet Evangelist, Cerf remains actively engaged in discussions about the network and its future. In this July 2009 lecture at Singularity University, Cerf gives a comprehensive overview of the state of the Internet today, and shares his thoughts on the importance of IPv6, how mobile computing will impact the Internet, the need for cloud computing standards and the growing Asian prominence online. “The Asian population will be the dominant user population of the Internet,” Cerf says. “There’s no doubt about that.”

Google Efficiency Update: PUE of 1.22

Google’s latest quarterly update of the energy efficiency of its data centers showed a slight uptick from the previous quarter, but improvement from the same period last year. The company publishes its efficiency data using Power Usage Effectiveness (PUE), the leading metric for “green” data centers.

“Overall, our fleet QoQ results were as expected,” Google reported. “The Q3 total quarterly energy-weighted average PUE of 1.22 was higher than the Q2 result of 1.20 due to expected seasonal effects. The trailing twelve-month (TTM) energy-weighted average PUE remained constant at 1.19. YoY performance improved from facility tuning and continued application of best practices. The quarterly energy-weighted average PUE improved from 1.23 in Q3′08, and the TTM PUE improved from 1.21.”
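
An "energy-weighted average" PUE weights each facility's PUE by the IT energy it consumed, so big sites move the fleet number more than small ones. A sketch of the computation (the per-site figures below are invented for illustration; Google does not publish them):

```python
def energy_weighted_pue(site_data):
    """Fleet-wide PUE weighted by each site's IT energy consumption.
    site_data: list of (it_energy_kwh, pue) pairs."""
    total_facility = sum(energy * pue for energy, pue in site_data)
    total_it = sum(energy for energy, _ in site_data)
    return total_facility / total_it

# Hypothetical fleet: one large efficient site dominates the average.
sites = [(4_000_000, 1.16), (2_000_000, 1.25), (1_000_000, 1.30)]
print(round(energy_weighted_pue(sites), 2))  # 1.21
```

A simple unweighted mean of those three PUEs would be 1.24; the weighting pulls the fleet figure toward the big site's 1.16.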

YouTube’s Bandwidth: Cheap, But Not Free

How much is YouTube paying for the bandwidth to deliver its 1 billion page views per day? Credit Suisse says $470 million a year. RampRate says $174 million. Google says “less than you think.” Now Wired.com asserts that YouTube’s bandwidth bill is zero, citing an analysis by Arbor Networks. The gist of the report is that YouTube has slashed its video delivery costs through the use of peering relationships and its in-house GoogleNet connecting its data centers (assembled through the company’s oft-reported purchases of dark fiber).

Can Google really be paying nothing to deliver video? Dan Rayburn from the Business of Online Video says Wired has misinterpreted the statement by Arbor Networks’ Craig Labovitz that “Google’s transit costs are close to zero.”

“Transit costs are not the same as bandwidth costs and Wired should know that,” Rayburn writes. He also says that although Google can cut its costs by peering with large ISPs, it’s not likely to strike similar deals with smaller providers.

Google CapEx Spending Rebounds Slightly

Google (GOOG) invested $186 million on its data centers in the third quarter of 2009, ending a string of five consecutive quarters in which the company slashed its capital spending. But Google’s infrastructure costs remained just a fraction of what they were during its 2007 building boom, when it announced four new data centers.

The third quarter capital expenditure (CapEx) total was up slightly from $139 million in the second quarter, but dramatically lower than the company spent during the same period in 2006 ($492 million), 2007 ($453 million) and 2008 ($452 million). Here’s a look at the recent trend:

Google Envisions 10 Million Servers

Google never says how many servers are running in its data centers. But a recent presentation by a Google engineer shows that the company is preparing to manage as many as 10 million servers in the future.

Google’s Jeff Dean was one of the keynote speakers at an ACM workshop on large-scale computing systems, and discussed some of the technical details of the company’s mighty infrastructure, which is spread across dozens of data centers around the world.

In his presentation (link via James Hamilton), Dean also discussed a new storage and computation system called Spanner, which will seek to automate management of Google services across multiple data centers. That includes automated allocation of resources across “entire fleets of machines.”

hi5 Expands at 365 Main Oakland Facility

Fast-growing social network hi5 will expand its data center footprint by taking space in 365 Main’s new facility in Oakland, Calif., the two companies said today. San Francisco-based hi5 has been a tenant at 365 Main’s flagship San Francisco data center since 2004. The expansion is the latest indicator of the growing importance of social networks as major users of data center space, and follows major infrastructure expansions by MySpace and Facebook.

hi5 was founded in 2003 and is now the 8th most-trafficked website in the world according to Alexa. The company will lease an additional 2,500 square feet of space at the Oakland data center, which opened in September 2007.

Power Savings Free Up Space at 365 Main

Colocation provider 365 Main says it has space available in its flagship data center in San Francisco for the first time since late 2006. The new capacity is a result of several large customers “graduating” to the company’s new facility across the bay in Oakland, and energy savings created by server consolidations by several customers.

San Francisco has historically been one of the strongest markets for colocation space, according to 365 Main VP of marketing Miles Kelly. “We’re in deep conversations with existing customers to take some of that capacity,” said Kelly. “We expect it to go quickly. San Francisco is a tight data center market.”

Two large customers have relocated to 365 Main’s Oakland data center, which opened in early 2007 and has more than 80,000 square feet of raised-floor technical space. Kelly wouldn’t say which customers, but previous announcements identify blog hosting provider Six Apart and social network Hi5 as customers that have migrated from San Francisco to Oakland.

Quake-Proofing An Entire Data Center

How do you engineer a data center for high availability in an earthquake zone? That’s a special challenge in San Francisco, which is home to a vibrant community of Internet businesses but experienced widespread devastation from the 1906 quake and sustained another big hit in the 1989 Loma Prieta (“World Series”) earthquake.

365 Main houses mission-critical equipment for more than 200 companies in its data center nestled alongside the Bay Bridge. When original owner AboveNet was converting the former tank turret factory for data center use in 2000, it confronted the earthquake risk, installing a base isolation system designed to keep the entire building stable when the earth moves.

“This is one of the safest buildings in San Francisco,” said Miles Kelly, senior VP of marketing for 365 Main.

Many data center companies use rack-level earthquake isolation units, which are installed under racks and cabinets and employ a ball-and-cone system to allow the equipment to gently roll back and forth during an earthquake. Providing earthquake protection at the building level involves similar concepts, but a lot more engineering.

Google's Data Center Secrets Revealed!

After years of secrecy (maybe because they thought no one was interested), Google held its "Data Center Efficiency Summit" yesterday, where the company showed off one of its DCs and custom web servers -- all in a bid to evangelize for energy efficiency. The green angle means that everything has been planned for optimum power use, from the 1AAA shipping containers (sporting over a thousand servers each) that make up the core of its operations, to the servers themselves -- each containing its own 12-volt UPS. This design is said to boast a staggering 99.9 percent energy efficiency, as opposed to a standard centralized UPS setup which at best would only score 95 percent. According to CNET, these are efficiency levels that the EPA doesn't envision as practical until at least 2011. But that ain't all -- hit that read link for the whole sordid affair, but not before you check out the video of a server itself after the break.
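To see why the jump from 95 to 99.9 percent efficiency matters at Google's scale, here's a rough sketch of the conversion losses avoided, using an assumed fleet size and per-server draw (neither figure is from the source):

```python
# Sketch: annual conversion losses of a centralized UPS (95% efficient)
# vs. Google's on-board 12-volt server battery (99.9%, per the summit).
# Fleet size and wattage are illustrative assumptions.
servers = 100_000
watts_per_server = 250
hours_per_year = 8_760

def annual_loss_kwh(efficiency: float) -> float:
    """Energy lost per year to UPS conversion at the given efficiency."""
    draw_kw = servers * watts_per_server / 1_000
    return draw_kw * hours_per_year * (1 - efficiency)

loss_central = annual_loss_kwh(0.95)   # centralized UPS losses
loss_onboard = annual_loss_kwh(0.999)  # per-server battery losses
print(int(loss_central - loss_onboard), "kWh saved per year")
```

Under these assumptions the on-board design saves on the order of ten million kWh a year for a 100,000-server fleet, which is why the efficiency gap dominates the comparison.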