Wednesday, November 11, 2009

Inside A Google Data Center

Google provided a look inside its data center operations at the Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. The presentations included a video tour of a Google data center.

crac data center

The size of your data center and the number of servers justify an uncompromising design. Computer Room Air Conditioners (CRACs) are precisely what the name implies: air conditioners designed specifically for the needs and demands of the computer room. I'm assuming from the wording of your question that the roof-mounted Air Handling Units (AHUs) you are also considering are conventional units designed for the standard office environment, and probably run off the building's central system. A data center should be as independent as possible.

It is unlikely that standard rooftop AHUs will maintain the close tolerance temperature and humidity control you should have in a high-availability data center environment, which it appears you intend to have if you are staffing the NOC 24x7. (Incidentally, the NOC should be provided with a normal office environment to make it more comfortable and controllable for the occupants.) Standard AHUs are normally designed to handle more latent heat (evaporation of moisture from human activity), whereas CRACs are designed to humidify while also handling mostly sensible heat (the dry heat you feel or sense coming off of computer hardware).

You don't identify where in the country you are located, but if you are in the North, roof-mounted AHUs can present some operational and maintenance problems in deep winter, and if you're in the South, they may not provide the level of humidity control you need. With today's concerns about energy efficiency, you will probably also find CRACs to be more economical in the long run, particularly if you are in a part of the country where you can take advantage of winter temperatures to utilize "free cooling."

The raised floor question is one that is widely debated these days. I still prefer a raised floor in most situations, particularly if I have enough building height to utilize the floor for air delivery. (That means 18 inches minimum, and preferably 24-30 inches. It also means controlling obstructions under the floor, including piping, power and cable.) With roof-mounted AHUs, you're not going to deliver the air to the floor, so it becomes a matter of personal preference and budget. I still like to have power and permanent cable infrastructure under the floor if I can, but others have different opinions.

If you can't make a raised floor high enough to use it for efficient air delivery, then, whether you use roof-mounted AHUs or CRACs, you will be delivering air from overhead. This can certainly be done, and can be done well, but it requires more design than simply blowing cold air into the room. Warm air rises, so dumping cold air in from above in the closely coupled "Hot Aisle/Cold Aisle" layout of a data center works against the physics: the warm air rises and mixes with the cold. Either solution will require well-designed ducting to cool and operate efficiently. That ducting is probably easier to do, and certainly less space consuming, with roof-mounted AHUs, for which the return is already at the ceiling. On "Top Blow" CRACs the return air intake is at the lower front or back of the unit, which presents greater duct design problems. In my opinion, however, unless other factors preclude it, I would opt for CRACs in an important facility every time.

If I'm interpreting your question correctly, you are asking if you should use enclosures with fans mounted on the rear doors, blowing into the hot aisles. I consider most of the "fan boosted" solutions to be means of addressing problems caused by a poor basic cooling design. I say "most" because there are cabinets that are truly engineered to support higher density loads than can be achieved with high-flow cabinet doors alone, even in a well designed facility. But these cabinets generally duct the hot air to an isolated return path – usually a plenum ceiling – so they are solving more than just an air flow problem; they are also preventing re-circulation of hot air, which in itself makes a big difference. Remember, however, that fans will try to pull air out of the floor or cold aisle in the quantity they were designed for, and this may air-starve other equipment farther down the row. (Variable speed fan control helps, but if heat load is high the fans will still try to pull maximum air.) A data center is a complete "system," and you can't just insert a "solution" into the middle of it without affecting things elsewhere. There is simply no "magic bullet" for the problem of high-density cooling.

chiller data center

A data center chiller is a cooling system used in a data center to remove heat from one element and deposit it into another element. Chillers are used by industrial facilities to cool the water used in their heating, ventilation and air-conditioning (HVAC) units. Round-the-clock operation of chillers is crucial to data center operation, given the considerable heat produced by many servers operating in close proximity to one another. Without them, temperatures would quickly rise to levels that would corrupt mission-critical data and destroy hardware.

The development of powerful chillers and associated computer room air conditioning (CRAC) units has allowed modern data centers to install highly concentrated server clusters, particularly racks of blade servers. Like many consumer and industrial air conditioners, however, chillers consume immense amounts of electricity and require dedicated power supplies and significant portions of annual energy budgets. In fact, chillers typically consume the largest percentage of a data center's electricity.

Manufacturers also have to account for extreme conditions and variability in cooling loads. This requirement has resulted in chillers that are often oversized, leading to inefficient operation. Chillers require a source of water, preferably already cooled to reduce the energy involved in lowering its temperature further. This water, after absorbing the heat from the computers, is cycled through an external cooling tower, allowing the heat to dissipate. Proximity to cold water sources has led to many major new data centers being sited along rivers in colder climates, such as the Pacific Northwest. The chillers themselves, along with integrated heat exchangers, are located outside of the data center, usually on rooftops or side lots.

Manufacturers have approached next-generation chiller design in a number of ways. For large-scale systems, bearingless designs significantly improve power utilization, given that the majority of chiller inefficiency results from energy lost through friction in the bearings. Smaller systems use SMART technologies to rapidly turn a chiller's compressor on and off, letting it work efficiently at anywhere from 10% to 100% of capacity, depending on the workload. IBM's "Cool Battery" technology employs a chemical reaction to store cold.

To maintain uptime, data center managers have to ensure that chillers have an independent generator if a local power grid fails. Without a chiller, the rest of the system will simply blow hot air. While any well-prepared data center has backup generators to support servers and other systems if external power supplies fail, managers installing UPS and HVAC systems must also determine whether a facility provides emergency power to the chiller itself. Data center designers, for this reason, often include connections for an emergency chiller to be hooked up. Multiple, smaller chillers supplied with independent power supplies generally offer the best balance of redundancy and efficiency, along with effective disaster recovery preparation. As recent major outages at hosting providers like Rackspace have demonstrated, however, once knocked offline, chillers may take too long to cycle back up to protect data centers, during which time servers can quickly overheat and automatically shut down.

data center economizer

An economizer is a mechanical device used to reduce energy consumption. Economizers recycle energy produced within a system or leverage environmental temperature differences to achieve efficiency improvements.

Economizers are commonly used in data centers to complement or replace cooling devices like computer room air conditioners (CRACs) or chillers. Data center economizers generally have one or more sets of filters to catch particulates that might harm hardware. These filters are installed in the duct work connecting an outside environment to a data center. Outside air also must be monitored and conditioned for the appropriate humidity levels, between 40% and 55% relative humidity, according to ASHRAE.

There are two versions of the device used in data centers: air-side economizers and water-side economizers.

* Air-side economizers pull cooler outside air directly into a facility to prevent overheating (a simple decision sketch follows this list).
* Water-side economizers use cold air to cool an exterior cooling tower. The chilled water from the tower is then used in the air conditioners inside the data center instead of mechanically chilled water, reducing energy costs. Water-side economizers often operate at night to take advantage of cooler ambient temperatures.
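
To make the air-side case concrete, here is a minimal Python sketch of the kind of go/no-go decision an economizer controller has to make, using the 40-55% relative humidity band cited above. The temperature ceiling and the function name are illustrative assumptions, not vendor settings.

```python
# Illustrative air-side economizer decision logic (assumed thresholds,
# not vendor settings). The 40-55% RH band follows the ASHRAE range
# cited above; the temperature ceiling is a placeholder.

MAX_OUTSIDE_TEMP_F = 65.0   # assumed ceiling for useful free cooling
MIN_RH = 40.0               # lower humidity bound from the text
MAX_RH = 55.0               # upper humidity bound from the text

def use_outside_air(outside_temp_f: float, outside_rh: float) -> bool:
    """Return True if outside air is cool enough and within the humidity band."""
    return outside_temp_f <= MAX_OUTSIDE_TEMP_F and MIN_RH <= outside_rh <= MAX_RH

if __name__ == "__main__":
    print(use_outside_air(58.0, 48.0))  # True: a cool, moderately humid day qualifies
    print(use_outside_air(80.0, 70.0))  # False: too warm and too humid
```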

Economizers can save data center operators substantial operating costs. According to GreenerComputing.org, economization has the potential to reduce the annual cost of a data center's energy consumption by more than 60 percent. Use of cooler external environmental temperatures to preserve hardware is an important component in sustainable green computing practices in general. Unfortunately, economizers are only useful for data centers located in cooler climates.

data center 2009

The Data Center 2009 Conference series offers fresh guidance on how to turn today's improvements in IT infrastructure and process efficiency into tomorrow's business advantage.

This must-attend conference and showcase will strive to provide a comprehensive agenda that helps those in the data center domain gain deep insight into managing the technological and business aspects. Topics will be built around critical factors such as people, technologies, processes, data center facilities, the business value proposition, and how to effectively manage the impending transitions.

cisco data center

Most enterprises have been exploring cloud computing to see how it might work for them. Cloud computing offers the ability to run servers on the Internet on demand. The storage, compute, and network functions are positioned and ready for use, so servers can be deployed within minutes, and paid for only for as long as they are in use.

An essential component of any cloud installation is its network. When servers are deployed in a cloud, they need an external network to be usable. The network services that they need are more than simple IP connectivity, and each customer of the cloud will need some customization. Here are some key types of cloud network service.

data center virtualization

Cisco Data Center 3.0 comprises a comprehensive portfolio of virtualization technologies and services that bring network, compute/storage, and virtualization platforms closer together to provide unparalleled flexibility, visibility, and policy enforcement within virtualized data centers:

* Cisco Unified Computing System unifies network, compute, and virtualization resources into a single system that delivers end-to-end optimization for virtualized environments while retaining the ability to support traditional OS and application stacks in physical environments.
* VN-Link technologies, including the Nexus 1000V virtual switch for VMware ESX, deliver consistent per-virtual-machine visibility and policy control for SAN, LAN, and unified fabric.
* Virtual SANs, virtual device contexts, and unified fabric help converge multiple virtual networks to simplify and reduce data center infrastructure and total cost of ownership (TCO).
* Flexible networking options to support all server form factors and vendors, including options for integrated Ethernet and Fibre Channel switches for Dell, IBM, and HP blade servers, provide a consistent set of services across the data center to reduce operational complexity.
* Network-embedded virtualized application networking services allow consolidation of remote IT assets into virtualized data centers.

vmware data center

Microsoft archrival VMware announced this week a set of future technologies called the “Virtual Datacenter OS” that some are comparing with Windows Server 2008.

But Windows Server 2008 and its integrated Hyper-V hypervisor aren’t what Microsoft is going to be pitting against VMware, Google and Amazon in the brave, new datacenter-centric world. Instead, Microsoft’s soon-to-be-unveiled “Zurich” foundational services and its “RedDog” system services are what the Redmondians will be fielding against its cloud competitors.

Not so different, are they? (VMware, like Microsoft, is talking about spanning both “on-premise” and “cloud” datacenters. Not too surprising when you remember from where VMware CEO Paul Maritz hails….)

Microsoft is expected to detail at least the mid-tier — the Zurich services — at its Professional Developers Conference in late October. I’m hearing that at least some of Microsoft’s Zurich deliverables are slated to be released in final form by mid-2009.

At the base level, down at the substrate that Microsoft has previously described as its “Global Foundation Services” layer, is where I’m betting RedDog will fit in. These are services like provisioning, networking, security, management, and operations. (Virtualization fits in here as well, in the context of helping users migrate between the cloud and on-premise and vice versa.) RedDog has been described as the horizontal “cloud OS” that Microsoft is building to power datacenters.

At the next level, “Zurich” — which Microsoft also has described as the Live Platform services layer — Microsoft will deliver federated identity, data synchronization, workflow and “relay services.” I’ve been hearing a bit more lately about Relay, which I believe Microsoft also has called “Overlay” services.

Overlay is a peer-to-peer network that will help bridge distributed, parallel systems. Supposedly, Overlay will help Microsoft do everything from load balancing and replication of application states across the network of machines, to providing a discovery framework for Web services to make use of presence and federation services. Elements of the overlay network are expected to be part of .Net 4.0, the next version of Microsoft’s .Net Framework.

Bottom line: Don’t think Windows Server 2008 is the be-all/end-all of Microsoft’s datacenter OS story. There is lots more that the company still won’t discuss publicly, but is known to be happening in the background.

disaster recovery data center

We’re starting to see some interesting case studies for servers in shipping containers. In a profile of Revlon CTO David Giambruno, ComputerWorld has some details on Revlon’s use of distributed data center containers in its disaster recovery network. Here’s an excerpt:

“Rather than have parallel datacenters and SANs in various countries, Giambruno’s plan put high-capacity storage at five sites across the world, consolidating data and applications at its U.S. datacenter. Using the same shipping system as for its cosmetics manufacturing, Revlon sent out five pre-loaded “Mini Me” datacenter containers to its four other IT centers, creating a global disaster recovery network of identical systems that assured resources would work when moved. These Mini Me datacenters have the SAN, storage, and servers for local operations and can support external fail-over from other locations if needed.”

Revlon says this approach reduced its datacenter power consumption by 72 percent and cut disaster recovery costs in half, as well as dramatically reducing the time required to back up 6.5 terabytes of data each week. Read ComputerWorld for more.

data center energy efficiency

The EPA ENERGY STAR Program released to Congress a report assessing opportunities for energy efficiency improvements for government and commercial computer servers and data centers in the United States.

The study projects near-term growth in energy use of U.S. computer servers and data centers, assesses potential cost and energy savings related to computer server and data center energy efficiency improvements, and recommends potential incentives and voluntary programs to promote energy-efficient computer servers and data centers. The report complements EPA’s ongoing efforts to develop new energy efficiency specifications for data servers, including market and technical research, industry collaboration, and explorations into a new ENERGY STAR buildings benchmark for data centers which reflects whole building operations.

The report recommends a mix of programs and incentives, as well as a holistic approach to achieve significant savings. Recommendations include:

* Collaborating with industry and other stakeholders on the development of a standardized whole-building performance rating system for data centers.
* Federal leadership through implementation of best practices for its own facilities.
* Development of ENERGY STAR specifications for servers and related product categories.
* Encouraging electric utilities to offer financial incentives to facilitate datacenter energy efficiency improvements.
* Federal government partnership with industry to issue a CEO Challenge, in which private-sector CEOs assess and improve the energy efficiency of their data centers.
* Public/private research and development into advanced technology and practices including computing software, IT hardware, power conversion, heat removal, controls and management, and cross-cutting activities.

national data centre pune

The National Data Centre (NDC) is the sole custodian of all meteorological data collected from various parts of India. The data are available for more than 125 years. Its mandate is to preserve quality-controlled data and supply it for weather prediction, aviation, agriculture, environmental studies, oceanography and shipping, and to researchers at various institutions and universities.

google data api

Google's mission is to organize the world's information and make it universally accessible and useful. This includes making information accessible in contexts other than a web browser and accessible to services outside of Google. As an end-user or a developer, you are the owner of your information, and we want to give you the best tools possible to access, manipulate, and obtain that information.

Why AtomPub?

Web syndication is an effective and popular method for providing and aggregating content. The Google Data Protocol extends AtomPub as a way to expand the types of content available through syndication. In particular, the protocol lets you use the AtomPub syndication mechanism to send queries and receive query results. It also lets you send data to Google and update data that Google maintains.

Why JSON?

JSON, or JavaScript Object Notation, is a lightweight data interchange format in widespread use among web developers. JSON is easy to read and write; you can parse it using any programming language, and its structures map directly to data structures used in most programming languages. Within the Google Data Protocol, JSON objects simply mirror the Atom representation.
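
As a rough illustration of that JSON mirroring, the sketch below requests a Google Data feed with the alt=json query parameter and walks the entry titles. The feed URL is a hypothetical placeholder, not a real endpoint, and the "$t" text key reflects how GData's JSON output represents Atom element text.

```python
# Minimal sketch: request a Google Data feed as JSON and list entry titles.
# The feed URL is a hypothetical placeholder; substitute a real GData feed.
import json
import urllib.request

FEED_URL = "https://example.com/gdata/feeds/sample?alt=json"  # placeholder

def list_entry_titles(feed_url: str) -> list:
    with urllib.request.urlopen(feed_url) as resp:
        doc = json.load(resp)
    # In the JSON mirror of Atom, element text lives under the "$t" key.
    entries = doc.get("feed", {}).get("entry", [])
    return [entry["title"]["$t"] for entry in entries]

if __name__ == "__main__":
    for title in list_entry_titles(FEED_URL):
        print(title)
```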

microsoft data center

Though the building alone covers a whopping 11 acres, you can't even see Microsoft's new $550 million data center in the hills west of San Antonio until you're practically on top of it. But by that point, you can hardly see anything else.

A "spine" of wires and pipes supplies power, cooling and other vital resources throughout Microsoft's Chicago data center, which is under construction.

These days, the massive data center is a bustling construction zone where visitors have to wear hardhats, helmets, orange safety vests, goggles and gloves. By September, it'll be the newest star in Microsoft's rapidly expanding collection of massive data centers, powering Microsoft's forays into cloud computing like Live Mesh and Exchange Online, among plenty of other as-yet-unannounced services. Pulling in, visitors are stopped by Securitas guards who check IDs and ask if they work for Microsoft. An incomplete gate marks the way. Microsoft's general manager of data center services, Mike Manos, won't say exactly what security measures will be in place when the data center opens, but won't rule anything out. "Will the gates be able to stop a speeding Mack truck?" I ask. "Or more," he responds. "Will you have biometrics?" "We have just about everything."

As the car rounds the bend beyond the gate, the building sweeps into full view. The San Antonio data center building itself is 475,000 square feet, or about 11 acres. It's a 1.3 mile walk to circumnavigate the building. To get a perspective on that, it's one building that's the size of almost 10 football fields laid out side-by-side, or 1/10th the floor space of the entire Sears Tower, covered with servers and electrical equipment. "I thought I understood what scale looked like," Manos says.

When the San Antonio data center was under peak construction, 965 people were working full time to build it, with more than 15 trucks of material coming and going each day in order to get the job done in 18 months from scouting the site to opening up. The facilities were built with continuous workflow of materials in mind, even after the site's completion.

As one walks toward the data center's main entrance, a feature that stands out is a row of several truck bays, much like those seen in an industrial park. Trucks pull up and leave servers or other materials inside the bays or "truck tracks," to be picked up and inventoried in the next room and then moved to storage or deployment.

Most everything in the data center is functional. On the small scale, wainscoting-like pieces of plywood cover the bottom of hallway walls to protect both the walls and servers and other equipment moving back and forth. On the large scale, San Antonio is actually two data centers side by side to separate business risk. "One side could burn down and the other one could continue to operate," Manos says.

The components inside are just as gargantuan as those on the outside. Seven massive battery rooms contain hundreds of batteries and 2.7 mW of back-up power apiece. Very few industrial sites, among them aluminum smelters, silicon manufacturers and automobile factories, consume as much energy as mega data centers of the order Microsoft is building.

yahoo data center

Companies like Google and IBM are trying to lead the world in cutting-edge, efficient data centers. Not to be outdone, on Tuesday Yahoo! announced they're hoping to change the future of data centers as well. The company unveiled plans to build one of the world's most efficient data centers in Lockport, NY, and the details do sound pretty exciting.

The data center will be powered mainly by hydroelectric power from Niagara Falls, with 90 percent of that energy going towards powering the servers. The center itself will be built to resemble a chicken coop, using 100 percent outside air to cool the servers, a task which typically gobbles up 50 percent of a data center's energy supply. And the company expects the yearly PUE average to be 1.1 or better.
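
Those two figures are consistent with each other: if roughly 90 percent of the incoming energy reaches the servers, the PUE works out to about 1.1. A quick back-of-the-envelope check, with the total facility load being an arbitrary illustrative number rather than a Yahoo figure:

```python
# Back-of-the-envelope check that "90% of energy to the servers"
# corresponds to a PUE of roughly 1.1. The 10 MW figure is an
# arbitrary illustrative facility load, not a Yahoo number.
total_facility_kw = 10_000          # assumed total draw (illustrative)
it_share = 0.90                     # fraction reaching the servers (from the article)
it_load_kw = total_facility_kw * it_share
pue = total_facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")           # -> PUE = 1.11, i.e. "1.1 or better" territory
```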

In addition to building this super-efficient data center, the company also committed to reducing the carbon footprint of all their data centers by 40 percent by 2014. They intend to accomplish this through using more renewable energy sources to power their data centers, implementing more efficient building designs and improving the efficiency of the servers themselves.

Another major commitment made in this announcement was that the company would cease purchasing carbon offsets and was aiming to reduce their carbon impact directly through decreasing energy consumption. We would love to hear of more companies relying less on offsets and more on energy-saving improvements.

google floating data center

Google Inc., which has been building out its data center inventory for the past few years, is literally floating its latest idea for the location of such facilities with the U.S. Patent and Trademark Office.

The company filed a patent application for a "water-based data center" detailing a floating data center, complete with an energy supply fed by a wave-powered generator system and a wind-powered cooling system using seawater.

The patent application, published Aug. 28, describes a modular setup that calls for "crane-removable modules" that store racks of computers. The modules would facilitate adding, subtracting and moving the computing power.

The patent application also details tapping water motion to generate power and the ability to configure the system in many different ways, including on-ship and on-shore data centers, various cooling mechanisms, backup systems, and even temporary housing and helicopter pads to support IT maintenance staffers.

Google is not the first to consider alternatives to the power-sucking data centers that it and others are constructing around the globe, to suggest unique locations or to tap the sea for innovative IT ideas.

Both Google and Microsoft Corp. are already using hydroelectric power options in the Northwest.

A couple in Nebraska that lives underground in a 1960s-era Atlas E Missile Silo wants to turn 15,000 square feet of their bunker into a highly secure data center.

And a company called SeaCode Inc. a few years ago proposed Hybrid-Sourcing, a venture that loads a fully staffed luxury liner with software engineers to get around H-1B visa restrictions and provide U.S. businesses with high-end tech workers.

Google officials say there is nothing to announce now regarding its water-based data center idea.

"We file patent applications on a variety of ideas that our employees come up with. Some of those ideas later mature into real products, services or infrastructure; some don't. We do a lot to make our infrastructure scalable and cost-efficient," a company spokesman said in response to an e-mail.

The idea, however, is fully outlined in the patent application.

Google says computing units could be mounted in shipping containers, which could be stored on ships or floating platforms and loaded and unloaded via equipment already used in shipping ports.

The computers in the containers, or "modules," could easily be replaced or updated as technology advances and adverse sea conditions exact their toll.

Proposed configurations include putting the modules on land next to a body of water.

Water is key for generating power, according to the patent, which cites the use of Pelamis machines and other devices such as wind generators to create energy.

The Pelamis machines use a series of hydraulics powered by water motion to drive motors connected to electrical generators. Other devices, such as a floating power-generation apparatus, use tethers and a spring-loaded hub to gather power from the rise and fall of water levels.

google ocean data center

Google Maps (GM) on the web and Google Earth (GE) as a 3D interactive atlas software application are ideal tools for sharing geographical information in a simple way.
GE as a mass-market visualization product is definitely a new step in the evolution of mapping and GIS, especially in the way it can be used with a couple of mouse clicks by anybody not expert in cartography.

So this webpage is the MIS contribution to this unique method of information gathering, sharing detailed information in the marine domain while awaiting Google 3D Maps for Oceans (perhaps with Google Oceans as the product name, though the name may change at launch, probably on February 2, 2009), the future release of the Google project under research to create visualization tools for marine data throughout the world, which will enable users to navigate below the sea surface.

google data center pue

Google continues to improve its energy efficiency, and is telling the industry how it’s doing it. After years of secrecy surrounding its data center operations, Google is disclosing many of its innovations today at the first Google Data Center Efficiency Summit in Mountain View, Calif.

In a morning presentation, Google engineers addressed its Power Usage Effectiveness (PUE) ratings, which have generated discussion within the industry since Google disclosed in October that its six company-built data centers had an average PUE of 1.21. That benchmark improved to 1.16 in the fourth quarter, and hit 1.15 in the first quarter of 2009, according to Google’s Chris Malone. The most efficient individual data center (described as “Data Center E”) has a PUE of 1.12.

“These are standard air-cooled servers, and best practices is what enabled these results,” said Malone. “What’s encouraging is that we’ve achieved this through the application of practices that are available to most data centers. There’s great potential for all data centers to improve their PUE.”

But there’s also some serious Google magic at work. One of the keys to Google’s extraordinary efficiency is its use of a custom server with a power supply that integrates a battery, allowing it to function as an uninterruptible power supply (UPS). The design shifts the UPS and battery backup functions from the data center into the server cabinet (see our February 2008 story describing this technology). This design provides Google with UPS efficiency of 99.9 percent, compared to a top end of 92 to 95 percent for the most efficient facilities using a central UPS.
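
To see why that efficiency gap matters at scale, here is a rough comparison of conversion losses for a central UPS versus the on-board battery approach. The efficiency figures come from the text; the 10 MW IT load is an assumed round number for illustration only.

```python
# Rough comparison of UPS conversion losses at an assumed 10 MW IT load.
# Efficiency figures (99.9% on-board vs. 92-95% central) come from the text;
# the 10 MW load is an illustrative assumption.
it_load_kw = 10_000

def ups_loss_kw(load_kw: float, efficiency: float) -> float:
    """Power burned in the UPS to deliver load_kw to the servers."""
    return load_kw / efficiency - load_kw

print(f"Central UPS at 93%:    {ups_loss_kw(it_load_kw, 0.93):,.0f} kW lost")   # ~753 kW
print(f"On-board UPS at 99.9%: {ups_loss_kw(it_load_kw, 0.999):,.0f} kW lost")  # ~10 kW
```
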
Malone addressed the details of how Google measures its PUE, picking up on recent best practices outlined by The Green Grid. Google measures its power use at the power distribution unit (PDU), since it has power tracking on the newer versions of its custom servers, but not all of them. It takes measurements on a continuous basis in four of its six PUE-rated data centers, and on a daily basis in the other two.

Malone said he expects Google’s PUE to head even lower. “A reasonable target can be 1.10,” said Malone, who said Google has designed its new data center in Belgium to operate without any chillers, which are traditionally the most energy-hungry pieces of data center gear. Google makes extensive use of free cooling, which uses outside air rather than air conditioners to keep the facility cool. But the Belgium site will be the first one to forgo chillers entirely, a tactic that is only possible in areas with a particular temperature range.

Urs Holzle, Google’s director of operations, said the growing interest in data center efficiency was a key factor in Google’s decision to share more about its operations. “We were really encouraged by the renewed industry interest,” said Holzle. “There wasn’t much benefit of preaching efficiency when not many people were interested.

“We’re proud of our results, but the point isn’t where we are, but where everyone can be,” he added. “We all need to improve our data centers. There’s really a lot of low-hanging fruit.”

And then there’s Google’s custom servers and UPS design, which was on display. Holzle was asked whether he expected to see server vendors introduce similar designs. “I think the on-board UPS is a natural for many applications,” he said. “We have patents, including some that have been published. We’d definitely be willing to license them to vendors.”

We’ll have more updates later today from the afternoon sessions, and much more coverage in days to come.

quincy data center

You’re unlikely to ever see the inside of a Google data center. But not so for Microsoft, which recently allowed the BBC to film inside its data center in Quincy, Washington. The BBC’s Rory Cellan-Jones provides a brief video tour of the 470,000 square foot facility in central Washington state, which houses the equipment powering Microsoft’s new Windows Azure cloud developer platform.

data center efficiency

Data center efficiency is critical to business needs as computing requirements grow, density increases, and power and cooling demands climb. Intel is helping the industry address the areas where data centers consume power, including power conversion and distribution, cooling, and even lighting.

To do this, Intel is focusing on data center performance metrics and delivering instrumentation to improve energy efficiency at the processor, platform, and data center levels.

In collaboration with industry standards organizations such as the Standard Performance Evaluation Corporation (SPEC) and The Green Grid, Intel is helping to develop a more reliable metric that measures energy consumption and performance for all levels of utilization (idle to peak) that a server would experience in a typical day, week or month.
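
One common way to construct a metric of that kind is to measure throughput and power at several load levels, including idle, and report aggregate performance per watt. The sketch below shows the general idea only; the numbers are made up, and this is not the exact SPEC or Green Grid formula.

```python
# Sketch of a utilization-weighted efficiency metric: measure throughput and
# power at several load levels (including idle) and report aggregate
# performance per watt. The numbers are invented for illustration and are
# not SPEC or Green Grid results.
measurements = [
    # (load level, operations/sec, watts)
    ("100%", 300_000, 250),
    ("70%",  210_000, 210),
    ("40%",  120_000, 170),
    ("10%",   30_000, 140),
    ("idle",       0, 120),
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)
print(f"Aggregate efficiency: {total_ops / total_watts:,.0f} ops/watt")
```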

rackspace data center

We know that every second your network is down you're losing opportunities, revenue and the confidence of your users and visitors. So we decided to do something about it. We designed and built the Zero-Downtime Network to minimize downtime, and we are so confident in its capabilities that we back it with a money-back guarantee if it ever goes down.

cloud computing data center

Yesterday at Microsoft’s 10th annual Microsoft Management Summit 2009, Microsoft gave the world a glimpse of how the datacenter will be evolving from today, into tomorrow, and into the era of cloud-based computing. Microsoft’s key focus was on the virtualized environments of tomorrow’s datacenters, and how a plethora of mobile wireless devices will connect to the cloud.

Brad Anderson, general manager of the Management and Services Division at Microsoft, said regarding today’s datacenter, “IT is faced with a growing set of end-user demands and business-critical expectations around service availability [through] familiar tools that provide a complete view of service levels, applications, infrastructure and clients.” However, the manner in which this will be delivered to customers is changing.

facebook data center

If your users are uploading 40 million photos a day, what does your data center look like? A new video seeking to recruit engineers for Facebook provides a glimpse of one of the social network’s data center facilities, along with some facts about the social network’s amazing growth. The company’s data centers store more than 40 billion photos, and users upload 40 million new photos each day – about 2,000 photos every second. Not surprisingly, the racks are packed. The facility is using a raised-floor design, with no containment but generous spacing between racks.

data center cooling

The cooling infrastructure is a significant part of a data center. The complex connection of chillers, compressors and air handlers creates the optimal computing environment, ensuring the longevity of the servers installed within and the vitality of the organization they support. Yet this ecosystem has come at a price. The EPA's oft-cited 2007 report predicted that data center energy consumption, if left unchecked, would reach 100 billion kWh by 2011, with a corresponding energy bill of $7.4 billion.

This conclusion isn't strictly based on Moore's Law or the need for greater bandwidth. The estimate envisions tomorrow's processing power being addressed with yesterday's cooling strategies. The shortcomings of those designs, coupled with demand for more processing power, would require 10 new power plants to provide the juice for it all, according to the report.

In light of this news, many industry insiders are turning a critical eye toward cooling, recognizing both the inefficiencies of current approaches and the improvements possible through new technologies. The information contained herein is designed to assist the data center professional who, while keeping uptime and redundancy inviolate, must also balance growing demand for computing power with pressure to reduce energy consumption.
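
As a sanity check on those projections, the dollar figure and the consumption figure imply an average electricity price of roughly 7 cents per kWh, which is in the right range for U.S. commercial rates of that era. A quick calculation:

```python
# Quick consistency check on the EPA projection cited above:
# $7.4 billion spent on ~100 billion kWh implies an average rate
# of roughly 7.4 cents per kWh.
projected_kwh = 100e9          # ~100 billion kWh by 2011 (from the report)
projected_cost_usd = 7.4e9     # $7.4 billion annual energy bill
print(f"Implied rate: {projected_cost_usd / projected_kwh * 100:.1f} cents/kWh")
```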

data center servers

While research in TCP/IP processing has been under way for several decades, the increasing networking needs of server workloads and evolving server architectures point to the need to explore TCP/IP acceleration opportunities. Researchers at Intel Labs are experimenting with mechanisms that address system and memory stall time overheads. They are also studying the effects of interrupt and connection-level affinity on TCP/IP processing performance.

Further, they have begun exploring mechanisms to support latency-critical TCP/IP usage models such as storage over IP and clustered systems. The goal is to identify the right level of hardware support for communication on future CMP processors and server platforms.

data center pue

Energy efficiency is rapidly becoming a key data center issue. The more energy efficient the data center, the lower the ongoing operating costs for the facilities’ users. To help potential customers compare the energy efficiency of a Digital Realty Trust data center with those of other firms they may be considering, we have begun to publish the PUE number as a data center benchmark for each facility that we are building from 2008 going forward.

google search data

With Google Insights for Search, you can compare search volume patterns across specific regions, categories, time frames and properties. See examples of how you can use Google Insights for Search.

google data center

Google will soon be publishing videos of many of the sessions at its Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. In the meantime, here’s a sneak preview with outtakes from the video tour of a Google data center, which showcased the company’s use of shipping containers to store servers and storage. Each of these 40-foot data center containers can house up to 1,160 servers, and Google has been using them since it built this facility in late 2005. The summit was a Google event for industry engineers and executives to discuss innovations to improve data center efficiency.

google datacenter search

Search the SERPs (Search Engine Result Pages) in Google datacenters. There are many different Google datacenters, and each has the potential to respond with different results for the same search query. Most times, seeing results that vary from datacenter to datacenter means that Google is in the process of updating its index. Visit Pagerank Checker.

datacenter pagerank check

In the Search Engine Optimization world, Google PageRank is one of the main indicators of your SEO progress. This search engine optimization tool checks the PageRank on the major Google datacenters. When the results vary from datacenter to datacenter, it usually indicates that Google is in the middle of a PageRank update. Visit Pagerank Checker.

Google datacenter check

Check a website's Google PageRank on major Google datacenters instantly; visit Google datacenter check.

google datacenter

Google’s data centers are the object of great fascination, and the intrigue about these facilities is only deepened by Google’s secrecy about its operations. We’ve written a lot about Google’s facilities, and thought it would be useful to summarize key information in a series of Frequently Asked Questions: The Google Data Center FAQ.



Why is Google so secretive about its data centers?
Google believes its data center operations give it a competitive advantage, and says as little as possible about these facilities. The company believes that details such as the size and power usage of its data centers could be valuable to competitors. To help maintain secrecy, Google typically seeks permits for its data center projects using limited liability companies (LLCs) that don’t mention Google, such as Lapis LLC (North Carolina) or Tetra LLC (Iowa).

How many data centers does Google have?
Nobody knows for sure, and the company isn’t saying. The conventional wisdom is that Google has dozens of data centers. We’re aware of at least 12 significant Google data center installations in the United States, with another three under construction. In Europe, Google is known to have equipment in at least five locations, with new data centers being built in two other venues.

Where are Google’s data centers located?
Google has disclosed the sites of four new facilities announced in 2007, but many of its older data center locations remain under wraps. Much of Google’s data center equipment is housed in the company’s own facilities, but it also continues to lease space in a number of third-party facilities. Much of its third-party data center space is focused around peering centers in major connectivity hubs. Here’s our best information about where Google is operating data centers, building new ones, or maintaining equipment for network peering. Facilities we believe to be major data centers are bold-faced.

Tuesday, November 10, 2009

google data center cooling

Worried about power costs in your data center? Feeling squeezed by the power company? Dreaming of modular designs and airflow at night? Well, relief is on the way–in four years or so.

This is the bright side–Gartner style. On Thursday, Gartner analyst Michael Bell outlined a few data center tips and factoids worth considering. Among the nuggets at the Gartner Symposium/ITxpo:

* By 2008, half of the current data centers won't have the power or cooling capacity to deal with their equipment. Translation: You're toast.
* In 2009, power and cooling will be your second biggest data center cost. Most of you are there already, and for companies like Google power costs are the top expense.
* But by 2011, technology – primarily better cooling strategies, more efficient chips, DC power, in-server cooling and real-time monitoring – will ride to the rescue, at least to the point where you'll be able to sleep.
* Blade servers are good (sort of). If you think blade servers racked and stacked are a data center fix, you're only half right. The blade-a-thon results in denser data centers and more computing power. That's fine and dandy, but now you need more juice to cool things down. Eventually this works out, as technology rides to the rescue. Gartner recommends doing an energy audit to see what your blade servers really consume. Then you can at least make up for the power loss elsewhere (another reason for green IT practices).

Since those costs stink, Gartner is advising that technology folks spend time designing their cooling systems. Maybe the next boom market will be in air conditioning design. And a few more items to ponder as these cooling issues get worked out:

* Up your data center temperature from 70 degrees to 74, and humidity levels from 45 percent to 50 percent.
* Measure your server energy efficiency. This is getting easier given that the EPA has given some guidance on the topic. Deeper measurements are hard to come by.
* Grill your data center hosting company on energy efficiency. This is an important point that I'd bet few companies are doing. Customers should make energy efficiency a priority, since your hosting company is only going to pass those costs along.

sun microsystems data center

Sun Microsystems has completed a new data center in Broomfield, Colo., built with efficiencies that the company says will save $1 million a year in electricity costs.

The data center features overhead cooling using Liebert XDs, airside economizing and flywheel uninterruptible power supplies (UPSes) from Active Power.

The project came about when Sun acquired StorageTek back in 2005, so it’s been in the works for a few years now. Both companies had data centers in Broomfield that sat on opposite sides of Route 36, a major road, and Sun decided that it would consolidate the two into one. It was able to condense 496,000 square feet of data center space at the old StorageTek campus into 126,000 square feet in the new location, a move that is saving 1 million kWh per month.

The move is also slicing down the amount of raised floor space from 165,000 square feet to just 700 square feet — enough to support a mainframe and old Sun E25K box for testing. The elimination of that much raised floor, including the construction needed to brace it to support such heavy IT equipment, is saving Sun $4 million, according to Mark Monroe, Sun’s director of sustainable computing for the Broomfield campus.

The overhead Liebert XD data center cooling units feature variable speed drive (VSD) fans that allow the supply of air to range from 8kw up to 30kw per rack. The Active Power flywheel UPSes eliminate the need to have a whole room dedicated to housing UPS batteries.

“Flywheels are usually 95% to 97% efficient,” Monroe said. “Battery systems are usually in the low 90s, high 80s.”
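
To put those percentages in kilowatt terms, here is a rough comparison of UPS losses at an assumed 5 MW protected load. The load figure is illustrative, not a Sun number, and the efficiencies are simply taken from the ranges quoted above.

```python
# Rough UPS loss comparison using the efficiency ranges quoted above.
# The 5 MW protected load is an assumed, illustrative figure.
protected_load_kw = 5_000

def loss_kw(load_kw: float, efficiency: float) -> float:
    """Power dissipated in the UPS to deliver load_kw downstream."""
    return load_kw / efficiency - load_kw

print(f"Flywheel UPS at 96%: {loss_kw(protected_load_kw, 0.96):,.0f} kW lost")  # ~208 kW
print(f"Battery UPS at 90%:  {loss_kw(protected_load_kw, 0.90):,.0f} kW lost")  # ~556 kW
```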

Finally, Sun is using a chemical-free method to treat its chilled water system that takes advantage of electromagnetics. The new method allows Sun to reuse the water in onsite irrigation systems and not have to flush out the water as often. It will save about 675,000 gallons of water and $25,000 per year.

In total, the company will be cutting its carbon dioxide emissions by 11,000 metric tons per year, largely because Broomfield gets so much of its power from coal-fired power plants.

san jose data center

When Fortune Data Centers was looking for its first development project in 2006, it anticipated the importance of power and cooling in new facilities. “What we saw was that everybody would be looking for old data centers and then want to add power and cooling,” said John Sheputis, the company’s founder and CEO. “We thought, ‘let’s go find a mission-critical building that already had the power and cooling and then bring the fiber to it.’ We decided that a retiring fab (semiconductor fabrication facility) would have the power and cooling we needed.”

Sheputis and his partners acquired a former Seagate fabrication facility in San Jose, Calif., and are busy transforming the site into a 140,000 square foot data center in the heart of Silicon Valley. Fortune Data Centers plans to complete the 80,000 square foot first phase of the project in October.

It turns out that Sheputis didn’t have to look far to find fiber. The facility is sandwiched between data centers for NTT and Verizon. “The building’s property lines are wrapped in fiber,” said Sheputis.

Fortune Data Centers is led by executives with experience in both IT and real estate. Sheputis founded and managed several Silicon Valley ventures specializing in infrastructure management, including Totality (now a unit of Verizon Business). Tim Guarnieri, the VP and GM of Data Center Operations, is a veteran of the Palo Alto Internet Exchange (PAIX) and Switch and Data (SDXC).

The real estate side of the equation is represented by VP of acquisitions Will Fleming (previously Regional Acquisitions Director for Virtu Investments) and CFO Bruce MacLean, who was President of Embarcadero Advisors and a veteran of Trammel Crow.

The San Jose facility is expected to achieve LEED Gold certification, and will be supported by 16 megawatts of power from PG&E. Sheputis says power will be priced at roughly 9.5 cents per kilowatt-hour, which he called PG&E’s “most competitive industrial tariff in Northern California.” The building will be carrier-neutral, with connectivity from AT&T (T), Verizon (VZ), Level 3 (LVLT) and AboveNet.
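
For a sense of what that tariff means at this scale, here is an illustrative calculation that assumes the full 16 MW is drawn around the clock, which a real facility would not do; it just shows the order of magnitude implied by the numbers in the article.

```python
# Illustrative electricity cost at the quoted tariff. Assumes the full
# 16 MW is drawn around the clock, which a real facility would not do;
# this just shows the scale implied by the numbers in the article.
capacity_kw = 16_000
rate_per_kwh = 0.095
hours_per_year = 8_760
annual_cost = capacity_kw * hours_per_year * rate_per_kwh
print(f"~${annual_cost:,.0f} per year at full draw")  # roughly $13.3 million
```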

Sheputis said the existing power at the site was critical to the project. “There’s no way we would get financing without a building with construction advantages,” he said. “The capital markets are very constrained right now. It’s a huge issue. If you are an independent developer, it’s hard to find financing.”

Sheputis says the model is one that can work in other markets. “If we’re successful, we may remake the market for retiring fabs,” he said.

google container data center

Four years after the first reports of server-packed shipping containers lurking in parking garages, Google today confirmed its use of data center containers and provided a group of industry engineers with an overview of how they were implemented in the company’s first data center project in the fall of 2005. “It’s certainly more fun talking about it than keeping it a secret,” said Google’s Jimmy Clidaras, who gave a presentation on the containers at the first Google Data Center Efficiency Summit today in Mountain View, Calif.

The Google facility features a “container hanger” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the cold aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.

Google’s design focused on “power above, water below,” according to Clidaras, and the racks are actually suspended from the ceiling of the container. The below-floor cooling is pumped into the cold aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment.

“Water was a big concern,” said Urs Holzle, who heads Google’s data center operations. “You never know how well these couplings (on the water lines) work in real life. It turns out they work pretty well. At the time, there was nothing to go on.”

Google was awarded a patent on a portable data center in a shipping container in October 2008, confirming a 2005 report from PBS columnist Robert Cringely that the company was building prototypes of container-based data centers in a garage in Mountain View. Containers also featured prominently in Google’s patent filing for a floating data center that generates its own electricity using wave energy.

Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.

The data center facility, referred to as Data Center A, spans 75,000 square feet and has a power capacity of 10 megawatts. The facility has a Power Usage Effectiveness (PUE) of 1.25, and when the container load is measured across the entire hangar floor space, it equates to a density of 133 watts per square foot. Google didn’t identify the facility’s location, but the timeline suggests that it’s likely one of the facilities at Google’s three-building data center complex in The Dalles, Oregon.
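
The density figures follow directly from the container and building numbers. A quick check, treating the roughly 8-by-40-foot footprint of a standard shipping container as an assumption since the article does not state it explicitly:

```python
# Sanity-check the power-density figures for the container and the hangar.
# The 320 sq ft container footprint assumes a standard 8 ft x 40 ft box,
# which the article implies but does not state explicitly.
container_kw = 250
container_sqft = 8 * 40                      # assumed 40-foot container footprint
print(f"Container:   {container_kw * 1000 / container_sqft:.0f} W/sq ft")  # ~781

facility_kw = 10_000                         # 10 MW facility load (from the article)
facility_sqft = 75_000
print(f"Whole floor: {facility_kw * 1000 / facility_sqft:.0f} W/sq ft")    # ~133
```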

Data center containers have been used for years by the U.S. military. The first commercial product, Sun’s Project Blackbox, was announced in 2006. We noted at the time that the Blackbox “extends the boundaries of the data center universe, and gives additional options to managers of fast-growing enterprises.”

It turns out that containers have developed as key weapons in the data center arms race between Google and Microsoft, which last year announced its shift to a container model. Microsoft has yet to complete its first container data center in Chicago.

Google Data Center Secrets Now Showing On YouTube

Not long ago, Google data centers were a closely guarded secret. The company's technical innovations were regarded as a competitive advantage.

But on April 1, in the spirit of a promise made in 2006 to be more transparent, Google revealed details about its custom servers and its data centers.
Google opened its kimono before more than 100 industry leaders and journalists at its Mountain View, Calif., headquarters and now has posted a video tour of one of its data centers and videos of its presentation on YouTube.

"We disclosed for the first time details about the design of our ultraefficient data centers," Google engineer Jimmy Clidaras said in a blog post Thursday. "We also provided a first-ever video tour of a Google container data center as well as a water treatment facility. We detailed how we measure data center efficiency and discussed how we reduced our facility energy use by up to 85%. The engineers who developed our efficient battery backup solution even brought an actual Google server to the event."

At the Google Data Center Efficiency Summit, Google said that its Power Usage Effectiveness (PUE) -- the ratio of total data center power to power used by facility IT equipment -- had improved from an average of 1.21 in the third quarter of 2008 to 1.16 during the fourth quarter of 2008.

A PUE of 1.0 represents a data center with no power loss to cooling or energy distribution. The typical corporate data center, based on 2006 statistics, has a PUE of 2.0 or more.
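
Put in concrete terms, the definition above means that for every watt delivered to IT equipment, a typical PUE 2.0 facility burns roughly another watt on cooling and power distribution, while a 1.16 facility burns only 0.16 watts of overhead. A small worked comparison, using an assumed 1 MW IT load purely for illustration:

```python
# Overhead implied by different PUE values for an assumed 1 MW IT load.
# PUE = total facility power / IT equipment power, as defined above.
it_load_kw = 1_000  # illustrative IT load

for label, pue in [("Typical corporate facility", 2.0), ("Google Q4 2008", 1.16)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"{label}: {total_kw:,.0f} kW total, {overhead_kw:,.0f} kW overhead")
```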

At 1.16, Google's PUE exceeds the EPA's most optimistic target for data center efficiency in 2011, which is a PUE of 1.2.

Google showed off a data center that it has been operating since 2005. With a 10-megawatt equipment load, it contains 45 shipping containers, each of which holds 1,160 servers. It operates at a PUE of 1.25.

Power efficiency increasingly affects revenue for businesses that depend on data centers. In a presentation at the Google event, James Hamilton, VP and distinguished engineer at Amazon Web Services, explained that while servers look like the major cost of data centers, computing costs are trending downward while power and power distribution costs are flat or trending upward. "Power and things functionally related to power will be the most important thing, probably next year," he said.

One of the more surprising innovations of Google's server design -- seen here in a CNET photograph -- appears to be rather mundane: The company's custom-designed server hardware includes a 12-volt battery that functions as an uninterruptible power supply. This obviates the need for a central data center UPS, which turns out to be less reliable than on-board batteries.

"This design provides Google with UPS efficiency of 99.9%, compared to a top end of 92% to 95% for the most efficient facilities using a central UPS," explains Rich Miller in a post on Data Center Knowledge.

youtube google datacenter

Judging by the heavy interest in last week's look at Google's previously secret server and data center design, I thought it would be useful to note that Google has now put much of the information on YouTube.

The disclosures came at a Google-sponsored conference on data center efficiency, which boils down to getting the most computing done with the least electrical power. The idea is core to Google's operations: the company operates at tremendous scale, tries to minimize its harm to the environment, and has a strong financial incentive to keep its costs low.
There are a number of videos from the conference online, starting with the tour of a Google data center. Google's servers, which the company itself designs, are packed 1,160 at a time into shipping containers that form a basic, modular unit of computing.
Also worth a look is the tour of Google’s water treatment facility. Google uses water to cool the hot air the servers produce. Most Google data centers use chillers to cool the water by refrigeration, but one data center in Belgium is experimenting with using only the less power-hungry evaporative cooling.
Finally, Google published the proceedings of the conference itself--part one, part two, and part three.

google data center council bluffs iowa

Google is very happy to be located in Council Bluffs, IA.

We announced our plans to build a data center here in early 2007, and today we are a fully operational site that has already begun benefitting our users around the world. We have had an excellent experience in Council Bluffs as we've built out this $600 million investment, and we look forward to being a part of the Iowa community for many years to come.

We're eager to share more information with you about what we're doing in the area. On this site, you'll find information about:

* what exactly a data center is
* the kinds of jobs that are available
* what Google does
* how to contact us
* our community outreach program
We hope that you use this site to familiarize yourself with the Council Bluffs data center, and feel free to share this site with anyone who may be interested.

We appreciate your help and support, and feel privileged to be part of the Council Bluffs community.

Google Will Delay Oklahoma Data Center

Google will delay construction of its data center in Pryor, Oklahoma, the company confirmed today. The $600 million facility was scheduled to be completed in early 2009, but instead will go online sometime in 2010.

The administrator of the Mid-American Industrial Park in Pryor, where Google has purchased 800 acres of land, told the Tulsa World the slowing economy was a factor in Google’s decision to push back its construction timetable. But the company said it was staggering the deployment of new data center space after bringing several projects online in recent months.

“Google’s data centers are crucial to providing fast, reliable services for our users and we’ve invested heavily in capacity to ensure we can meet existing as well as future demand,” a Google spokesperson said. “This means there is no need to make all our data centers operational from day one. We anticipate that the Pryor Creek facility will come into use within the next 12 to 18 months. Google remains committed to and excited about operating this facility in Mayes County.”

Google announced the Oklahoma data center project in May, 2007, when it purchased 800 acres of land in Pryor for a massive facility that would employ 100 workers with an average salary of $48,000. The Pryor project was the third of four data center construction projects Google announced in the first half of 2007. The search company has completed the first data center in its project in Lenoir, North Carolina and is preparing to begin production in its facility in Goose Creek, South Carolina.

Google Eyes Austria for New Data Center

Google said today that it is contemplating building a data center in Kronstorf, Austria, where it has purchased 185 acres of farmland for the project. The project has been in the works since May, when news of Google’s site location scouting trips in Austria was published on Twitter by Kronstorf residents.

UPDATE: Initial reports from AFP said Google had confirmed that it would build a data center at the Kronstorf site, but the company says its process has not yet reached that stage.

“I’m pleased to confirm that we are looking at the potential opportunities offered to us by this site in Kronstorf, with regards to the possibility of building a data center facility,” a Google spokesperson told Data Center Knowledge. “This particular site has a number of features to recommend it – including a good environment in which to do business, excellent economic development team, strong infrastructure and the future possibility of attracting and retaining an excellent workforce.

“We have no immediate plans to start building on the site, as we will next proceed with some technical studies and design work. We are just at the stage of evaluating what future opportunities it might offer us, and we will keep you updated when our plans are firmed up.”

Kronstorf is a town of 3,000 near the city of Linz. The land purchased by Google is near several hydro-electric power plants on the river Enns, which would satisfy Google’s requirement for the use of renewable energy sources in its facilities. Kronstorf also is close to major universities in Linz, Steyr and Hagenberg, which could supply a trained IT workforce.

Google Sorts 1 Petabyte of Data in 6 Hours

Google has rewritten the record book, and perhaps raised the bar, for sorting massive volumes of data. The company said Friday that it had sorted 1 terabyte of data in just 68 seconds, eclipsing the previous mark of 209 seconds established in July by Yahoo. Google’s effort used 1,000 computers running MapReduce, while Yahoo’s effort featured a 910-node Hadoop cluster.

Then, just for giggles, they expanded the challenge: “Sometimes you need to sort more than a terabyte, so we were curious to find out what happens when you sort more and gave one petabyte (PB) a try,” wrote Grzegorz Czajkowski of the Google Systems Infrastructure Team. “It took six hours and two minutes to sort 1PB (10 trillion 100-byte records) on 4,000 computers. We’re not aware of any other sorting experiment at this scale and are obviously very excited to be able to process so much data so quickly.”
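
Google didn’t publish code alongside those numbers, but the general shape of a MapReduce-style sort is well known: partition records into key ranges, sort each partition independently, then concatenate the results in range order. A toy, single-machine sketch of that structure (not Google’s implementation, and nowhere near petabyte scale):

# Toy sketch of a partition-then-sort approach in the spirit of a MapReduce sort.
# Not Google's implementation; it only illustrates the partition/sort/concatenate idea.
import random

def partition(records, num_workers):
    """'Map' step: route each record to a worker by key range."""
    buckets = [[] for _ in range(num_workers)]
    for rec in records:
        buckets[min(int(rec * num_workers), num_workers - 1)].append(rec)
    return buckets

def sort_partitions(buckets):
    """'Reduce' step: each worker sorts its own bucket independently."""
    return [sorted(b) for b in buckets]

records = [random.random() for _ in range(100_000)]  # keys in [0, 1)
globally_sorted = [r for bucket in sort_partitions(partition(records, 8)) for r in bucket]
assert globally_sorted == sorted(records)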

A Closer Look at Google’s European Data Centers

Google’s purchase of land in Austria for a possible data center highlights the global nature of the search giant’s infrastructure. Google’s existing European footprint includes several data centers in the Netherlands and one in Belgium, as well as peering centers in major European bandwidth hubs.

Erwin Boogert recently posted new photos of Google’s facility near Groningen in the Netherlands. Erwin originally shot pictures of the facility in 2004, but revisited in late October for a second look. Erwin is an IT journalist who has also put together a Google Maps mashup with more information about Google’s operations in the Netherlands and Belgium.

Google Slows N.C. Build, Foregoing State Grant

Google has told the state of North Carolina that it won’t meet the job creation criteria for a $4.7 million state grant.

Report: Google Building ‘Fast Lane’ With ISPs

Google is seeking to place content distribution servers within the networks of major ISPs, creating a “fast lane” for its content, according to the Wall Street Journal. The arrangement could allow Google to use less bandwidth to serve its content, but the Journal questioned whether the move would be at odds with Google’s public support for Net neutrality.

Google has responded with a blog post asserting that Google’s edge caching is consistent with network neutrality and current practices of content network providers like Akamai and Limelight.

Google Buys More Land at Lenoir Data Center

Google may be slowing the pace of its data center construction in the short-term, but continues to prepare for major data center expansion over the long term. Google said today that it has paid $3.13 million to buy an additional 60 acres of land in Lenoir, North Carolina, near where the company has built a data center on a 220-acre property. The announcement follows news that Google has slowed the pace of construction at its Lenoir project and told the state of North Carolina that it won’t meet the job creation criteria for a $4.7 million state grant.

“This is a strategic location for our company,” Tom Jacobik, operations manager of the Lenoir facility, told the Asheville Citizen-Times. “We look forward to a long and active presence here.” Google has finished one data center building at Lenoir, which it opened in May. It plans to eventually complete a second data center building at the site.

Rumor: Google Building Its Own Router?

There’s a rumor making the rounds today that Google is building its own router. The report appeared first at the SD Times blog, was picked up at Bnet and is now being discussed on Slashdot. The gist of the rumor is that Juniper would lose a big client and Cisco should be worried. Most of Google’s custom hardware development has focused on optimizing its in-house systems for peak efficiency, rather than developing commercial products. Google (GOOG) has declined to comment.

Google Throttles Back on Data Center Spending

Google (GOOG) spent $368 million on its infrastructure in the fourth quarter of 2008 as it scaled back its ambitious data center building boom, idling a $600 million project. The fourth quarter capital expenditure (CapEx) total, which was included in today’s earnings release, was less than half the $842 million Google spent on its data centers in the first quarter of 2008.

Will Project Delays Kill Data Center Incentives?

In the big-picture analysis, the decisions by Google and Microsoft to delay major data center projects seem prudent. The projects cost $550 million to $600 million apiece, and neither company appears likely to run out of data center capacity anytime soon.

But state and local officials in Oklahoma and Iowa are focused on a smaller picture. The postponements of Google’s project in Pryor, Oklahoma and Microsoft’s facility in West Des Moines, Iowa dash any hopes that huge data center projects will create jobs in the midst of the economic crisis. Both states offered significant tax incentive packages to attract these companies, with expectations of a payoff in high-tech jobs.

Iowa Gov. Chet Culver “is obviously disappointed by the news that Microsoft has decided to delay their plans for a new data center in West Des Moines,” Culver’s office said in a statement Friday. “This is just one more sign that no one is immune from the economic recession gripping our nation. The Governor remains hopeful that conditions will improve and Microsoft will begin construction on their new facility soon.”

A Glimpse Inside Google’s Lenoir Data Center

Google has cracked open its data center doors – just slightly – and allowed media to have a look inside one of its facilities. The company is renowned for the secrecy surrounding its data center operations, which it considers to be a competitive advantage in its battle with business rivals like Microsoft and Yahoo. Google recently allowed a reporter and photographer from the Charlotte Observer to visit the office area of its data center in Lenoir, North Carolina, about 70 miles northwest of Charlotte.

Google Plans Data Center in Finland

Google has bought a former paper mill in southeastern Finland and is likely to convert the facility into a data center, the company said today. Google will pay $51.6 million for the site, and expects to close on the purchase by the end of the first quarter.

Google (GOOG) has been slowing its data center development in the United States, recently idling a scheduled project in Oklahoma. But the company continues to invest in land deals in Europe in preparation for future data center expansion. In November Google purchased 185 acres of farmland in Kronstorf, Austria for future development as a data center.

Gmail Outage Focused in European Network

Google says that yesterday morning’s Gmail outage was caused by disruptions in its European data centers. The incident was triggered by unexpected issues with a software update, resulting in more than two hours of downtime for the widely used webmail service. Here’s an explanation:

"This morning, there was a routine maintenance event in one of our European data centers. This typically causes no disruption because accounts are simply served out of another data center. Unexpected side effects of some new code that tries to keep data geographically close to its owner caused another data center in Europe to become overloaded, and that caused cascading problems from one data center to another. It took us about an hour to get it all back under control."

Google is offering a 15-day service credit to Google Apps customers who pay to use Gmail with their domains. As background, here’s some additional information on Google’s European data centers. The company has purchased land in Austria and Finland to expand its data center footprint in Europe, but has made no announcements yet about whether or when it will build in these locations.

Google: Stalled Data Centers Will Be Built

Google (GOOG) isn’t abandoning the data center projects where it has slowed or halted construction due to the slowing economy, the company said this week. In a presentation at the Goldman Sachs Technology and Internet Conference in San Francisco, executives said Google intends to eventually complete a planned facility in Oklahoma, and that recent land purchases reflect long-term planning for an even larger data center network.

In October Google said it will delay construction of a $600 million data center campus in Pryor, Oklahoma that was originally scheduled to be completed in early 2009. The company also reportedly has halted construction work on a second data center building in Lenoir, North Carolina, where Google has already built and commissioned one data center.

“We will build those data centers,” said Alan Eustace, Google’s senior vice president for engineering. “There’s no doubt that over the life of the company we will need that computation. None of those sites have been shelved.”

“All the demand coming our way is relentless,” added chief financial officer Patrick Pichette. “It’s not a question of if, it’s a question of when.”

Google Confirms Data Center in Finland

It’s official: Google will build a major data center at a former paper mill in Hamina, Finland, the company said today. Google bought the former Stora Enso newsprint plant for $51 million last month, and said it was “likely” to use the facility for a data center. Today Google posted details about the Hamina project on the data center section of its web site.

Google said it expected to invest 200 million Euros (about $252 million) in the project. That’s a smaller investment than the $600 million the company has announced for U.S. projects in North Carolina, South Carolina and Iowa. It wasn’t immediately clear whether the smaller number reflects a change in scope for Google’s capital expenditures on data centers or was related to the specifics of the Hamina property.

“When fully developed, this facility will be a critical part of our infrastructure for many years to come,” Google said. “Limited testing of the facility should be underway in 2010 and the center should be fully operational later that year.”

How Google Routes Around Outages

Making changes to Google’s search infrastructure is akin to “changing the tires on a car while you’re going at 60 down the freeway,” according to Urs Holzle, who oversees the company’s massive data center operations. Google updates its software and systems on an ongoing basis, usually without incident. But not always. On Feb. 24 a bug in the software that manages the location of Google’s data triggered an outage in Gmail, the widely-used webmail component of Google Apps.

Just a few days earlier, Google’s services remained online during a power outage at a third-party data center near Atlanta where Google hosts some of its many servers. Google doesn’t discuss operations of specific data centers. But Holzle, the company’s Senior Vice President of Operations and a Google Fellow, provided an overview of how Google has engineered its system to manage hardware failures and software bugs. Here’s our Q-and-A:

Data Center Knowledge: Google has many data centers and distributed operations. How do Google’s systems detect problems in a specific data center or portion of its network?

Urs Holzle: We have a number of best practices that we suggest to teams for detecting outages. One way is cross monitoring between different instances. Similarly, black-box monitoring can determine if the site is down, while white-box monitoring can help diagnose smaller problems (e.g. a 2-4% loss over several hours). Of course, it’s also important to learn from your mistakes, and after an outage we always run a full postmortem to determine if existing monitoring was able to catch it, and if not, figure out how to catch it next time.

DCK: Is there a central Google network operations center (NOC) that tracks events and coordinates a response?

Urs Holzle: No, we use a distributed model with engineers in multiple time zones. Our various infrastructure teams serve as “problem coordinators” during outages, but this is slightly different than a traditional NOC, as the point of contact may vary based on the nature of the outage. On-call engineers are empowered to pull in additional resources as needed. We also have numerous automated monitoring systems, built by various teams for their products, that directly alert an on-call engineer if anomalous issues are detected.
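
As a rough illustration of the black-box/white-box distinction Holzle describes above, a black-box probe only asks whether the service responds at all, while a white-box check inspects internal metrics and can alert an on-call engineer about partial degradation such as a 2-4% error rate. A minimal sketch with hypothetical endpoints, metric names and thresholds:

# Minimal sketch contrasting black-box and white-box checks.
# The URL, metric names and thresholds here are hypothetical.
import urllib.request

def black_box_check(url="https://example.com/"):
    """Black-box: probe the service from outside; is it up at all?"""
    try:
        return urllib.request.urlopen(url, timeout=5).status == 200
    except OSError:
        return False

def white_box_check(metrics):
    """White-box: inspect internal metrics to catch partial degradation
    (e.g. a small error rate) that a simple up/down probe would miss."""
    return metrics["error_rate"] < 0.02 and metrics["p99_latency_ms"] < 500

print(black_box_check())
print(white_box_check({"error_rate": 0.03, "p99_latency_ms": 120}))  # degraded -> False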

Efficient UPS Aids Google’s Extreme PUE

Google continues to improve its energy efficiency, and is telling the industry how it’s doing it. After years of secrecy surrounding its data center operations, Google is disclosing many of its innovations today at the first Google Data Center Efficiency Summit in Mountain View, Calif.

In a morning presentation, Google engineers addressed the company’s Power Usage Effectiveness (PUE) ratings, which have generated discussion within the industry since Google disclosed in October that its six company-built data centers had an average PUE of 1.21. That benchmark improved to 1.16 in the fourth quarter, and hit 1.15 in the first quarter of 2009, according to Google’s Chris Malone. The most efficient individual data center (described as “Data Center E”) has a PUE of 1.12.

“These are standard air-cooled servers, and best practices is what enabled these results,” said Malone. “What’s encouraging is that we’ve achieved this through the application of practices that are available to most data centers. There’s great potential for all data centers to improve their PUE.”

But there’s also some serious Google magic at work. One of the keys to Google’s extraordinary efficiency is its use of a custom server with a power supply that integrates a battery, allowing it to function as an uninterruptible power supply (UPS). The design shifts the UPS and battery backup functions from the data center into the server cabinet (see our February 2008 story describing this technology). This design provides Google with UPS efficiency of 99.9 percent, compared to a top end of 92 to 95 percent for the most efficient facilities using a central UPS.

Google’s Custom Web Server, Revealed

It’s long been known that Google builds its own web servers, enabling it to design the servers for peak performance and energy efficiency. At today’s Google Data Center Energy Summit, the company put one of its custom servers on display. Here’s a brief video of the server, which features a power supply that integrates a battery, allowing it to function as an uninterruptible power supply (UPS). The design shifts the UPS and battery backup functions from the data center into the server cabinet.

Google Unveils Its Container Data Center

Four years after the first reports of server-packed shipping containers lurking in parking garages, Google today confirmed its use of data center containers and provided a group of industry engineers with an overview of how they were implemented in the company’s first data center project in the fall of 2005. “It’s certainly more fun talking about it than keeping it a secret,” said Google’s Jimmy Clidaras, who gave a presentation on the containers at the first Google Data Center Efficiency Summit today in Mountain View, Calif.

The Google facility features a “container hangar” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the cold aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.
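
Those figures are internally consistent: a 40-foot container has roughly 320 square feet of floor space, so 250 kilowatts works out to just over 780 watts per square foot, and a bit over 200 watts per server. A quick back-of-the-envelope check:

# Back-of-the-envelope check of the container figures cited above.
container_kw = 250
servers_per_container = 1160
container_area_sqft = 40 * 8   # approximate footprint of a 40-foot container

watts_per_sqft = container_kw * 1000 / container_area_sqft
watts_per_server = container_kw * 1000 / servers_per_container

print(f"{watts_per_sqft:.0f} W/sq ft")     # ~781 W/sq ft, matching "more than 780"
print(f"{watts_per_server:.0f} W/server")  # ~216 W per server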

Google’s design focused on “power above, water below,” according to Clidaras, and the racks are actually suspended from the ceiling of the container. Cooling comes from below: chilled air is pumped into the cold aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment.

Inside A Google Data Center

Google will soon be publishing videos of many of the sessions at its Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. In the meantime, here’s a sneak preview with outtakes from the video tour of a Google data center, which showcased the company’s use of shipping containers to house servers and storage. Each of these 40-foot data center containers can house up to 1,160 servers, and Google has been using them since it built this facility in late 2005. The summit was a Google event for industry engineers and executives to discuss innovations to improve data center efficiency. This video runs about 5 minutes, 45 seconds.

Google’s Data Center Water Treatment Plant

Earlier today we noted the industry’s efforts to reduce the use of water by large cloud computing data centers. Google says two of its newer data centers are now “water self-sufficient,” including the company’s new data center in Belgium, which is located next to an industrial canal. Google has built a 20,000 square foot water treatment plant to prepare the canal water for use in its nearby data center. Here’s a video with more information about the water treatment plant, which was first presented at the Google Efficient Data Centers Summit last week in Mountain View.

Microsoft, Google and Data Center Glasnost

One of the best-attended Tuesday sessions at The Uptime Institute’s Symposium 2009 in New York was a presentation by Google’s Chris Malone. As has been noted elsewhere, Malone’s talk summarized much of the information that Google disclosed April 1 at its Data Center Efficiency Summit. But there was a noteworthy moment during the question and answer period when Daniel Costello approached the mike.

Costello is one of the architects of Microsoft’s CBlox data center container strategy. Keep in mind that Microsoft has yet to finish its first containerized facility in Chicago, and Costello had just watched a video documenting Google’s completion of a data center container farm in Sept. 2005, nearly three years before Microsoft announced its project. Would there be tension, or perhaps a debate about the dueling designs?

Google CapEx Continues To Trend Lower

Google (GOOG) spent $263 million on its infrastructure in the first three months of 2009, the lowest quarterly total since it began operating its own data centers, as it continued in cash conservation mode. The first quarter capital expenditure (CapEx) total, which was included in today’s earnings release, was less than a third of the $842 million Google spent on its data centers in the same quarter in 2008.

Google: No Plans Yet for Second S.C. Site

This week Google invited reporters to tour the office area of its new data center in Goose Creek, South Carolina, following up on a similar open house at its facility in Lenoir, North Carolina. The Charleston Post and Courier notes that several employees bring their dogs to the office and hold ping-pong tournaments. “We work really hard, but we play hard too,” said Bill League, data center facilities manager.

While the event focused on community relations, there was one nugget of information: Google told the paper it has no current development plans for its 466-acre site in Blythewood, South Carolina. “We regularly review our resources and customer needs, and will keep the community informed if and when new plans develop,” Google said in a statement. “We expect to continue working with local officials on infrastructure and other aspects of the land acquisition.”

Google Gets Patent for Data Center Barges

The U.S. Patent Office has awarded Google a patent for its proposal for a floating data center that uses the ocean to provide power and cooling. Google’s patent application was filed in Feb. 2007, published in October 2008 and approved on Tuesday (and quickly noted by SEO by the Sea).

The patent application describes floating data centers that would be located 3 to 7 miles from shore, in 50 to 70 meters of water. If perfected, this approach could be used to build 40 megawatt data centers that don’t require real estate or property taxes.

The Google design incorporates wave energy machines (similar to Pelamis Wave Energy Converter units) which use the motion of ocean surface waves to create electricity and can be combined to form “wave farms.” The patent documents describe a cooling system based on sea-powered pumps and seawater-to-freshwater heat exchangers.

Rolling Outage for Google

Many users are experiencing trouble reaching Google today in a rolling outage that is affecting some regions more than others. The troubles were first seen at Google News, which came back online after an outage this morning, apparently to add video links to news searches.

Meanwhile, there are widespread reports on Twitter of trouble reaching other Google services, including even the home page. The Google Apps status page is acknowledging a “service disruption” for Gmail and says a problem with Google Calendar has been resolved.

UPDATE: Urs Holzle, who oversees the company’s data center operations, has posted an explanation on the official Google blog. ”An error in one of our systems caused us to direct some of our web traffic through Asia, which created a traffic jam,” Holzle wrote. “As a result, about 14% of our users experienced slow services or even interruptions. We’ve been working hard to make our services ultrafast and ‘always on,’ so it’s especially embarrassing when a glitch like this one happens. We’re very sorry that it happened, and you can be sure that we’ll be working even harder to make sure that a similar problem won’t happen again.”

Google Traffic Shifted to NTT During Outage

What happened during Thursday’s performance problems for Google? The company said that “an error in one of our systems caused us to direct some of our web traffic through Asia, which created a traffic jam.” Renesys, which tracks Internet routing, has additional details in a blog post today.

Renesys says traffic was shifted to NTT, whose network received an influx of traffic bound for Google that would normally be routed through Level 3 and/or AT&T. At one point NTT’s network was handling 85 to 90 percent of the traffic bound for Google, according to Renesys. Check out Martin Brown’s analysis for more.

NTT America said that traffic flow may have shifted, but the problems were due entirely to issues at Google. ”NTT’s network was not the cause of Google’s performance problems,” said a spokesperson for NTT America, which maintains the company’s network. “No traffic jam occurred at NTT.”

Google on ‘The Data Center as a Computer’

We’ve written often about Google’s data center operations, which include innovations such as data center containers and custom servers featuring on-board UPS batteries. Google’s approach to design and innovation is shaped by a vision of the data center that extends beyond traditional thinking. Two of the company’s data center thought leaders, Luiz Andre Barroso and Urs Holzle, have published The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (PDF), a paper that summarizes the company’s big-picture approach to data center infrastructure.

Google Opens Council Bluffs Data Center

About 650 people attended a ceremony yesterday to mark the “launch” of Google’s new data center in Council Bluffs, Iowa. The $600 million facility is seen as a key economic development win for Iowa, and will eventually result in 200 full-time jobs paying $50,000 a year. “Google has opened up the door to opportunities for us – and Iowa – that we didn’t have in the past,” Council Bluffs Mayor Tom Hanafan said Tuesday, when state and local leaders, workers and about 100 local residents gathered at the new facility. “All of a sudden, companies are looking at Iowa a little differently. Google has put us on the map.”

Google is still hiring workers in Council Bluffs, but local officials noted that the company has room to expand. Google has purchased an additional 1,000 acres of land about four miles from the first phase of its data center project. The company was considering an adjacent 130-acre piece of land for the second phase, but may eventually expand at the larger parcel instead. The purchase doesn’t necessarily mean that Google is expanding the scope of its project, but gives it the space to build additional data centers if needed.

Vint Cerf at the Google Internet Summit

In early May Google hosted the Google Internet Summit 2009 at its Mountain View, Calif. campus. The event brought together thought leaders in Internet infrastructure, with the goal of gathering “a wide range of knowledge to inform Google’s future plans.” This video presents the introductory remarks from Vint Cerf, Google’s chief Internet evangelist and the co-designer of the TCP/IP network protocol. “The Internet is at an important flex point in its history,” Cerf says. “Scaling, in many different ways, is still an important issue. Cloud computing is adding another dimension to the way the Internet is being used.” This video runs 9 minutes, but the substance of Cerf’s remarks commences at the 4-minute mark, after some opening greetings and housekeeping.

Voldemort Industries

Google is serious about its data centers. But it’s also determined not to take itself too seriously, as evidenced by an anecdote in Oregon Business News, which examines the impact of a Google data center on the economy in The Dalles, Oregon. “As you pull up to the riverfront campus, you’ll spot a Voldemort Industries sign, a self-effacing reference to the Harry Potter character known as ‘He-Who-Must-Not-Be-Named,’” writes The Oregonian.

Okay, perhaps it’s not perfectly aligned with the whole “don’t be evil” motto. The sign is a reference to the use of a code name (Project 02) by local officials during the planning process for the Google data center in The Dalles. The whole cloak-and-dagger business apparently didn’t sit well with some in the community.

“Google officials say they learned from the backlash, and make a point to be transparent when they open data centers,” the story notes. “They have also gotten involved in The Dalles. Workers volunteer at cleanups or Habitat for Humanity; a garden at the edge of the property is public; grants go to community groups. Last fall, Google hosted an open house at its cafeteria and visitor center.”

The Billion Dollar HTML Tag

Can a single HTML tag really make a difference on a corporation’s financial results? It can at Google, according to Marissa Mayer, who says web page loading speed translates directly to the bottom line.

“It’s clear that latency really does matter to users,” said Mayer, the VP of Search and User Experience at Google and today’s keynote speaker at the O’Reilly Velocity Conference. Google found that delays of fractions of a second consistently caused users to search less. As a result, its engineers consistently refine page code to capture split-second improvements in load time.

This phenomenon is best illustrated by a single design tweak to the Google search results page in 2000 that Mayer calls “The Billion Dollar HTML Tag.” Google founders Sergey Brin and Larry Page asked Mayer to assess the impact of adding a column of text ads in the right-hand column of the results page. Could this design, which at the time required an HTML table, be implemented without the slower page load time often associated with tables?

Mayer consulted the W3C HTML specs and found a tag (the “align=right” table attribute) that would allow the right-hand table to load before the search results, adding a revenue stream that has been critical to Google’s financial success.
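
For readers curious what that looks like in practice, here is a rough reconstruction of the layout idea (illustrative markup only, not Google’s actual page source), written out by a short Python script:

# Illustrative reconstruction of the "align=right" layout trick; not Google's page source.
# A table with align="right" can be emitted ahead of the results and floated to the
# right-hand side, so the results are not nested inside a slow two-column outer table.
page = """
<table align="right" width="25%">
  <tr><td>Sponsored links render here, floated to the right</td></tr>
</table>
<div>
  Search results stream in here and display as they arrive,
  without waiting for a surrounding two-column table to finish loading.
</div>
"""

with open("results_sketch.html", "w") as f:
    f.write(page)
print("Wrote results_sketch.html")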

Managing Megasites: ‘An Insane Amount of Will’

One runs a popular service on just 350 servers, while another likely has more than a million servers. The common denominator: major traffic. Executives from six of the web’s most popular properties – Google, Microsoft, Yahoo, Facebook, MySpace and LinkedIn – shared the stage at Structure 09 yesterday to discuss their infrastructure and innovations.

Managing a megasite requires plenty of hardware. But that’s not the secret sauce, according to Vijay Gill, the Senior Manager of Engineering and Architecture at Google (GOOG). “The key is not the data centers,” said Gill. “Those are just atoms. Any idiot can build atoms and have this vast infrastructure. How you optimize it – those are the hard parts. It takes an insane amount of will.”

The challenges faced by the six sites varied. “I’m taking a minimalist approach,” said Lloyd Taylor, the VP of Technical Operations for the LinkedIn social network. “How little infrastructure can we use to run this? The whole (LinkedIn) site runs on about 350 servers.” That’s due largely to the fact that much of the content served by LinkedIn consists of profiles and discussion groups that are heavy on text. “We’re not a media intensive site,” said Taylor.

Was Google Busting on Bing?

Last week we wrote about a panel at the Structure 09 conference featuring technologists from the largest Internet sites, including Google’s Vijay Gill and Najam Ahmad from Microsoft. The Register also covered this panel in a story titled Google mocks Bing and the stuff behind it, noting that Gill referenced several data points with the observation that ”If you Bing for it, you can find it.”

Gill, the Senior Manager of Engineering and Architecture at Google, has blogged a response titled Google Does Not Mock Bing.

“I wasn’t mocking Bing when I said ‘Bing for it, you can find it.’” Gill writes. “I meant that seriously, in the spirit of giving props to a competitor, and a good one at that. Najam and I have been friends since before Google had a business plan, and I have the greatest respect for him and for Microsoft as a company.” Gill goes on to compare the different infrastructure approaches for the Google and Bing search products.

Google App Engine Hit By Outage

It’s been a rough week for uptime. Google App Engine has been struggling with performance problems for hours, and appears to be down. The problems began at 6:30 a.m. Pacific time, when App Engine began experiencing high latency and error rates. “All applications accessing the Datastore are affected,” Google said in a notice to developers. Shortly afterward the service went into “unplanned maintenance mode” and began operating as read-only, meaning developers couldn’t update their apps. “Our engineering teams are looking into the root cause of the problem,” Google said.

The App Engine Status Page is currently unavailable. The App Engine team is providing offsite updates via Twitter. UPDATE: At 3:15 Eastern, the status page is back: “Datastore read access has been reenabled and the team expects Datastore write access will be reenabled shortly.”

Google Launches Chrome Operating System

Google confirmed late Tuesday that it is launching a PC operating system based on its Chrome web browser. “Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks,” Google’s Sundar Pichai wrote on the Google blog. “Later this year we will open-source its code, and netbooks running Google Chrome OS will be available for consumers in the second half of 2010.”

“Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year,” Pichai added. “The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies.”

The New York Times has early coverage in a story for Wednesday’s paper, which anticipated the new operating system being unveiled in a Google blog post Wednesday afternoon (makes me wonder if Google published early once the Times story was available online).

Google’s Chiller-less Data Center

Google (GOOG) has begun operating a data center in Belgium that has no chillers to support its cooling systems, a strategy that will improve its energy efficiency while making local weather forecasting a larger factor in its data center management.

Chillers, which are used to refrigerate water, are widely used in data center cooling systems but require a large amount of electricity to operate. With the growing focus on power costs, many data centers are reducing their reliance on chillers to improve the energy efficiency of their facilities.

This has boosted adoption of “free cooling,” the use of fresh air from outside the data center to support the cooling systems. This approach allows data centers to use outside air when the temperature is cool, while falling back on chillers on warmer days.

Google has taken the strategy to the next level. Rather than using chillers part-time, the company has eliminated them entirely in its data center near Saint-Ghislain, Belgium, which began operating in late 2008 and also features an on-site water purification facility that allows it to use water from a nearby industrial canal rather than a municipal water utility.

Year-Round Free Cooling
The climate in Belgium will support free cooling almost year-round, according to Google engineers, with temperatures rising above the acceptable range for free cooling about seven days per year on average. The average temperature in Brussels during summer reaches 66 to 71 degrees, while Google maintains its data centers at temperatures above 80 degrees.

So what happens if the weather gets hot? On those days, Google says it will turn off equipment as needed in Belgium and shift computing load to other data centers. This approach is made possible by the scope of the company’s global network of data centers, which provide the ability to shift an entire data center’s workload to other facilities.
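
In effect, the cooling strategy reduces to a simple decision driven by outside conditions: free-cool whenever the weather allows, and on the handful of hot days shed or shift load rather than fall back on chillers. A greatly simplified, hypothetical sketch of that decision (the threshold is illustrative; Google has not published its control logic):

# Greatly simplified, hypothetical sketch of a chiller-less cooling decision.
# The temperature threshold is illustrative, not a published Google value.
FREE_COOLING_MAX_F = 80

def cooling_action(outside_temp_f):
    if outside_temp_f <= FREE_COOLING_MAX_F:
        return "free cooling: outside air and evaporative cooling"
    # With no chillers to fall back on, turn off equipment as needed
    # and shift the computing load to other data centers.
    return "shed load and shift work to other facilities"

for temp_f in (55, 70, 95):
    print(temp_f, "->", cooling_action(temp_f))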

Google Slashes CapEx Even Further

Google (GOOG) spent just $139 million on its data centers in the second quarter of 2009, as it further squeezed capital spending that was already at historic lows. On one level, the company’s ability to reduce infrastructure spending reflects well on Google’s capacity planning during its 2007 building boom, when it announced four new data centers. It’s also part of a broader effort by Google to reduce spending during the current economic slowdown.

The second quarter capital expenditure (CapEx) total was dramatically lower than the company spent during the same period in 2006 ($690 million), 2007 ($597 million) and 2008 ($698 million).

Google’s PUE Numbers Slip Slightly

Many companies have released information about their data center energy efficiency using the Power Usage Effectiveness (PUE) metric. But none have received more scrutiny than Google, whose exceptional PUE numbers have been widely discussed in the data center industry. The latest data from Google show that the company’s numbers can go up as well as down.

Google’s quarterly energy-weighted average PUE rose to 1.20 in the second quarter, up from 1.19 in the first quarter of 2009. The best-performing facility had a PUE of 1.15 for the quarter, compared to 1.11 for the most efficient site in Q1.

The Apple-Google Data Center Corridor

Google and Apple may be having their tensions at the boardroom level, as seen in this week’s news that Google CEO Eric Schmidt will resign as a director of Apple. But the two technology giants are aligned in another area: the merits of western North Carolina as a haven for massive Internet data centers.

Apple’s planned $1 billion data center in Maiden, North Carolina is just 25 miles from a huge Google data center complex in Lenoir. The proximity is not an accident, as the Google project in Caldwell County prompted economic development officials in nearby towns to begin pursuing data center development.

That included Catawba County, where economic development director Scott Millar began a concerted effort to attract data centers, a process that culminated in Apple’s decision to build in Maiden. Millar believes the presence of the technology sector’s two marquee names will attract additional projects, establishing the region as a major data center destination.

“I think that from an investment standpoint, now every CIO in the country is forced to look at the merits of the Apple/Google Corridor,” said Millar. “We’re not going to quit here.”

How Google Manages Data Across Data Centers

How does Google manage data across the many data centers the company operates? At the Google I/O Developer Conference in May, Google’s Ryan Barrett discussed “multihoming” across multiple data center locations with large scale storage systems. Barrett’s presentation examines different approaches to multi-homing, with a particular focus on how Google engineers the App Engine datastore. This video runs about 1 hour. (Link via High Scalability, which has additional commentary).

Yes, Gmail Was Down

Google’s Gmail is experiencing an outage this afternoon. What’s remarkable is the volume of commenting and Tweeting (looks like about 1,000 per minute) with users remarking/complaining about the outage, a reminder of how essential Gmail has become for many users.

“We’re aware that people are having trouble accessing Gmail,” Google said on its Twitter account. “We’re working on fixing it.” A post on the Gmail blog added that “if you have IMAP or POP set up already, you should be able to access your mail that way in the meantime.” UPDATE: As of 5:15 pm Eastern time, it appears the Gmail web interface is back up for most users.

Gmail also had an extended outage in February when a code change triggered “cascading problems” that overloaded a data center in Europe. For more background on that outage, along with a broader overview of Google’s incident management, see our interview with Google senior VP of operations Urs Holzle (How Google Routes Around Outages).

Router Ripples Cited in GMail Outage

Google has published an update on this afternoon’s Gmail downtime. “Today’s outage was a Big Deal, and we’re treating it as such,” writes Ben Treynor, Google’s VP of Engineering and Site Reliability Czar. “We’re currently compiling a list of things we intend to fix or improve as a result of the investigation.”

The problem? Treynor says Google underestimated the load that routine maintenance on “a small fraction” of Gmail servers would place on the routers supporting the application. “At about 12:30 pm Pacific a few of the request routers became overloaded and in effect told the rest of the system ’stop sending us traffic, we’re too slow!’,” Treynor wrote. “This transferred the load onto the remaining request routers, causing a few more of them to also become overloaded, and within minutes nearly all of the request routers were overloaded.”
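
The failure mode Treynor describes is a classic cascade: once one router sheds traffic, the remaining routers each absorb a larger share, which pushes the next one past its limit. A toy simulation of that dynamic (router counts, loads and capacities are made up):

# Toy simulation of a cascading request-router overload; all numbers are made up.
def routers_left_standing(num_routers, total_load, capacity_per_router):
    healthy = num_routers
    while healthy > 0:
        per_router = total_load / healthy
        if per_router <= capacity_per_router:
            return healthy  # the load fits on the remaining routers
        healthy -= 1        # another overloaded router stops taking traffic
    return 0                # every router has tipped over

# With ten request routers the load fits comfortably; with only eight available,
# they overload one after another until none are left.
print(routers_left_standing(num_routers=10, total_load=85, capacity_per_router=10))  # -> 10
print(routers_left_standing(num_routers=8, total_load=85, capacity_per_router=10))   # -> 0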

All Quiet at Google’s Oklahoma Data Center

Amazon isn’t the only cloud builder that has idled a data center project, as the recession has led Google to scale back its capital investments in data center infrastructure. The most visible sign of this is the postponement of the company’s planned data center project in Pryor, Oklahoma. The $600 million facility was scheduled to be completed in early 2009, but last fall the company said it would be delayed and may come online sometime in 2010.

A reader recently drove by the Google site at the Mid-American Industrial Park in Pryor and took some photos of the site. There’s no construction activity at present, and the site appears unchanged from the date of the stoppage. At that time, Google had already done some work on the existing structure that will serve as the first data center at the location, where Google owns 800 acres of land for future expansion.

Facebook, Google Ready for Faster Ethernet

As Facebook was announcing that it had reached 300 million users last week, network engineer Donn Lee was making the case for faster Ethernet networks – definitely 100-Gigabit Ethernet, and perhaps even Terabit Ethernet.

It is “not unreasonable to think Facebook will need its data center backbone fabric to grow to 64 terabits a second total capacity by the end of next year,” Lee told attendees of a seminar hosted by the Ethernet Alliance, an industry consortium focused on the expansion of Ethernet technology.

10 Gigabit Ethernet is currently the fastest rate available, although higher capacity can be achieved by aggregating 10 Gigabit links. The Ethernet Alliance event last week brought together networking developers working on the IEEE 802.3ba standard for 40/100 Gigabit Ethernet. Presentations from Facebook’s Lee and Google network architect Bikash Koley offered a deep dive into network technologies, architecture and the future of Ethernet standards. The next phase of development envisioned is Terabit Ethernet, although some stakeholders see 400 Gigabit Ethernet as a likely interim step.
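
A quick calculation shows why aggregating 10 Gigabit links does not get you to a fabric of that size: 64 terabits per second would take thousands of 10GbE links, versus hundreds at 100GbE.

# How many links of each speed would a 64 Tbps backbone fabric require?
target_tbps = 64
for link_gbps in (10, 40, 100):
    links_needed = target_tbps * 1000 / link_gbps
    print(f"{link_gbps} GbE: {links_needed:,.0f} links")
# 10 GbE:  6,400 links
# 40 GbE:  1,600 links
# 100 GbE:   640 links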