Wednesday, November 11, 2009

Inside A Google Data Center

Google provided a look inside its data center operations at the Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. The presentations included a video tour of a Google data center; click here to see the video.

CRAC Data Center

The size of your data center and the number of servers justify an uncompromising design. Computer Room Air Conditioners (CRACs) are precisely what the name implies: air conditioners designed specifically for the needs and demands of the computer room. I'm assuming from the wording of your question that the roof-mounted Air Handling Units (AHUs) you are also considering are conventional units designed for the standard office environment, and probably run off the building's central system. A data center should be as independent as possible.

It is unlikely that standard rooftop AHUs will maintain the close tolerance temperature and humidity control you should have in a high-availability data center environment, which it appears you intend to have if you are staffing the NOC 24x7. (Incidentally, the NOC should be provided with a normal office environment to make it more comfortable and controllable for the occupants.) Standard AHUs are normally designed to handle more latent heat (evaporation of moisture from human activity), whereas CRACs are designed to humidify while also handling mostly sensible heat (the dry heat you feel or sense coming off of computer hardware).
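
To put rough numbers on that difference, here is a minimal sketch comparing sensible heat ratios (SHR, the sensible portion of the total cooling load). The example loads are illustrative assumptions, not measurements from any particular facility.

```python
def sensible_heat_ratio(sensible_kw, latent_kw):
    """SHR = sensible load / (sensible + latent load)."""
    return sensible_kw / (sensible_kw + latent_kw)

# Assumed loads: an office floor carries a significant latent (moisture) load
# from its occupants; a server room's load is almost entirely dry, sensible heat.
office_shr = sensible_heat_ratio(sensible_kw=70.0, latent_kw=30.0)      # 0.70
server_room_shr = sensible_heat_ratio(sensible_kw=95.0, latent_kw=5.0)  # 0.95

print(f"Office SHR:      {office_shr:.2f}")
print(f"Server room SHR: {server_room_shr:.2f}")
```

A comfort-oriented AHU is built for the first kind of load profile, while a CRAC is built for the second.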

You don't identify where in the country you are located, but if you are in the North, roof-mounted AHUs can present some operational and maintenance problems in deep winter, and if you're in the South, they may not provide the level of humidity control you need. With today's concerns about energy efficiency, you will probably also find CRACs to be more economical in the long run, particularly if you are in a part of the country where you can take advantage of winter temperatures to utilize "free cooling."

The raised floor question is one that is widely debated these days. I still prefer a raised floor in most situations, particularly if I have enough building height to utilize the floor for air delivery. (That means 18 inches minimum, and preferably 24 to 30 inches. It also means controlling obstructions under the floor, including piping, power, and cable.) With roof-mounted AHUs, you're not going to deliver the air through the floor, so it becomes a matter of personal preference and budget. I still like to have power and permanent cable infrastructure under the floor if I can, but others have different opinions.

If you can't make a raised floor high enough to use it for efficient air delivery, then, whether you use roof-mounted AHUs or CRACs, you will be delivering air from overhead. This can certainly be done, and done well, but it requires more design than simply blowing cold air into the room. Warm air rises, so dumping cold air in from above in the closely coupled Hot Aisle/Cold Aisle design of a data center essentially works against the laws of physics: the warm air will rise and mix with the cold. Either solution will require well-designed ducting to cool and operate efficiently. Overhead delivery is probably easier, and certainly less space-consuming, with roof-mounted AHUs, for which the return is already at the ceiling. On "Top Blow" CRACs the return air intake is at the lower front or back of the unit, which presents greater duct design problems. In my opinion, however, unless other factors preclude it, I would opt for CRACs in an important facility every time.
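
To get a feel for how much air has to be moved regardless of where it is delivered from, the sketch below applies the common sensible-heat rule of thumb, CFM ≈ BTU/hr ÷ (1.08 × ΔT°F). The rack load and the allowed temperature rise are assumed example figures.

```python
def required_cfm(it_load_watts, delta_t_f=20.0):
    """Approximate airflow (CFM) needed to carry away a sensible heat load.

    Rule of thumb: CFM = BTU/hr / (1.08 * delta_T), where delta_T is the
    allowed air temperature rise in degrees Fahrenheit.
    """
    btu_per_hr = it_load_watts * 3.412   # convert watts to BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# A hypothetical 5 kW rack with a 20 F rise from cold aisle to hot aisle.
print(f"{required_cfm(5000):.0f} CFM per 5 kW rack")   # roughly 790 CFM
```

Whether that air comes up through floor tiles or down from overhead ducts, it has to reach the equipment intakes without first mixing with the hot exhaust.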

If I'm interpreting your question correctly, you are asking if you should use enclosures with fans mounted on the rear doors, blowing into the hot aisles. I consider most of the "fan boosted" solutions to be means of addressing problems caused by a poor basic cooling design. I say "most" because there are cabinets that are truly engineered to support higher density loads than can be achieved with high-flow cabinet doors alone, even in a well designed facility. But these cabinets generally duct the hot air to an isolated return path – usually a plenum ceiling – so they are solving more than just an air flow problem; they are also preventing re-circulation of hot air, which in itself makes a big difference. Remember, however, that fans will try to pull air out of the floor or cold aisle in the quantity they were designed for, and this may air-starve other equipment farther down the row. (Variable speed fan control helps, but if heat load is high the fans will still try to pull maximum air.) A data center is a complete "system," and you can't just insert a "solution" into the middle of it without affecting things elsewhere. There is simply no "magic bullet" for the problem of high-density cooling.
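
The air-starvation point can be illustrated with a toy airflow budget: if fan-boosted cabinets demand more air than the tiles (or the cold aisle) can supply, equipment farther down the row loses. The numbers below are made up for illustration; real rooms need measurement or CFD analysis.

```python
# A toy airflow budget for one cold aisle. Purely illustrative numbers.
tile_supply_cfm = 8 * 500          # 8 perforated tiles at ~500 CFM each
cabinet_demand_cfm = [900, 900, 1200, 1200, 600, 600]  # fan-boosted cabinets pull more

total_demand = sum(cabinet_demand_cfm)
shortfall = total_demand - tile_supply_cfm

print(f"Supply:  {tile_supply_cfm} CFM")
print(f"Demand:  {total_demand} CFM")
if shortfall > 0:
    print(f"Shortfall of {shortfall} CFM: downstream cabinets will pull "
          "recirculated hot air instead of conditioned air.")
```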

Chiller Data Center

A data center chiller is a cooling system used in a data center to remove heat from one element and deposit it into another element. Chillers are used by industrial facilities to cool the water used in their heating, ventilation and air-conditioning (HVAC) units. Round-the-clock operation of chillers is crucial to data center operation, given the considerable heat produced by many servers operating in close proximity to one another. Without them, temperatures would quickly rise to levels that would corrupt mission-critical data and destroy hardware.

The development of powerful chillers and associated computer room air conditioning (CRAC) units has allowed modern data centers to install highly concentrated server clusters, particularly racks of blade servers. Like many consumer and industrial air conditioners, however, chillers consume immense amounts of electricity and require dedicated power supplies and significant portions of annual energy budgets. In fact, chillers typically consume the largest percentage of a data center's electricity.
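
A back-of-the-envelope sketch shows why the chiller plant tends to dominate the power bill. The IT load, chiller coefficient of performance (COP), and overhead figures below are assumptions chosen only to illustrate the arithmetic, not data from any real facility.

```python
def chiller_power_kw(it_load_kw, cop=4.0):
    """Electrical power drawn by the chiller plant to remove it_load_kw of heat,
    assuming a coefficient of performance (COP) of `cop`."""
    return it_load_kw / cop

it_kw = 1000.0                        # hypothetical 1 MW of IT load
cooling_kw = chiller_power_kw(it_kw)  # ~250 kW for the chiller plant
other_overhead_kw = 150.0             # assumed fans, UPS losses, lighting
pue = (it_kw + cooling_kw + other_overhead_kw) / it_kw

print(f"Chiller plant draw: {cooling_kw:.0f} kW")
print(f"Approximate PUE:    {pue:.2f}")
```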

Manufacturers also have to account for extreme conditions and variability in cooling loads. This requirement has resulted in chillers that are often oversized, leading to inefficient operation. Chillers require a source of water, preferably already cooled to reduce the energy involved in lowering its temperature further. This water, after absorbing the heat from the computers, is cycled through an external cooling tower, allowing the heat to dissipate. Proximity to cold water sources has led to many major new data centers being sited along rivers in colder climates, such as the Pacific Northwest. The chillers themselves, along with integrated heat exchangers, are located outside of the data center, usually on rooftops or side lots.

Manufacturers have approached next-generation chiller design in a number of ways. For large-scale systems, bearingless designs significantly improve power utilization, given that the majority of chiller inefficiency results from energy lost through friction in the bearings. Smaller systems use SMART technologies to rapidly turn a chiller's compressor on and off, letting it work efficiently at anywhere from 10% to 100% of capacity, depending on the workload. IBM's "Cool Battery" technology employs a chemical reaction to store cold.
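
The compressor-cycling idea can be reduced to a simple duty-cycle calculation. This is not any vendor's actual control logic, only a sketch of how switching a fixed-capacity compressor on and off can track a partial load.

```python
def duty_cycle(load_kw, compressor_capacity_kw):
    """Fraction of time the compressor must run to match a partial load,
    clamped to the 0-100% range."""
    return max(0.0, min(1.0, load_kw / compressor_capacity_kw))

# A hypothetical 100 kW compressor tracking a varying cooling load.
for load in (10, 35, 80, 120):
    dc = duty_cycle(load, 100)
    print(f"{load:>3} kW load -> compressor on {dc:.0%} of the time")
```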

To maintain uptime, data center managers have to ensure that chillers have an independent generator if a local power grid fails. Without a chiller, the rest of the system will simply blow hot air. While any well-prepared data center has backup generators to support servers and other systems if external power supplies fail, managers installing UPS and HVAC systems must also determine whether a facility provides emergency power to the chiller itself. Data center designers, for this reason, often include connections for an emergency chiller to be hooked up. Multiple, smaller chillers supplied with independent power supplies generally offer the best balance of redundancy and efficiency, along with effective disaster recovery preparation. As recent major outages at hosting providers like Rackspace have demonstrated, however, once knocked offline, chillers may take too long to cycle back up to protect data centers, during which time servers can quickly overheat and automatically shut down.
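
A crude estimate of how quickly a room overheats once cooling stops helps explain that urgency. The calculation below considers only the air in the room (walls, floor, and hardware absorb some heat, so real rooms warm somewhat more slowly), and every figure is an assumption.

```python
# Rough temperature-rise rate after a chiller failure, ignoring thermal mass
# other than the room air itself. All figures are assumptions.
room_volume_m3 = 30 * 20 * 3          # 30 m x 20 m room with a 3 m ceiling
air_mass_kg = room_volume_m3 * 1.2    # air density ~1.2 kg/m^3
cp_air = 1005.0                       # specific heat of air, J/(kg*K)
it_load_w = 300_000                   # 300 kW of IT load still running

deg_c_per_second = it_load_w / (air_mass_kg * cp_air)
print(f"~{deg_c_per_second * 60:.1f} deg C per minute once cooling stops")
```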

Data Center Economizer

An economizer is a mechanical device used to reduce energy consumption. Economizers recycle energy produced within a system or leverage environmental temperature differences to achieve efficiency improvements.

Economizers are commonly used in data centers to complement or replace cooling devices like computer room air conditioners (CRACs) or chillers. Data center economizers generally have one or more sets of filters to catch particulates that might harm hardware. These filters are installed in the duct work connecting an outside environment to a data center. Outside air also must be monitored and conditioned for the appropriate humidity levels, between 40% and 55% relative humidity, according to ASHRAE.

There are two versions of the device used in data centers: air-side economizers and water-side economizers.

* Air-side economizers pull cooler outside air directly into a facility to prevent overheating (a simple admission check is sketched after this list).
* Water-side economizers use cold outside air to cool water in an exterior cooling tower. The chilled water from the tower is then used in the air conditioners inside the data center instead of mechanically chilled water, reducing energy costs. Water-side economizers often operate at night to take advantage of cooler ambient temperatures.
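
Here is the minimal air-side admission check mentioned above, using the ASHRAE-style humidity band cited earlier. The supply setpoint and the decision rule itself are simplifying assumptions; real controllers also consider dew point or enthalpy.

```python
def use_airside_economizer(outside_temp_c, outside_rh,
                           supply_setpoint_c=18.0,
                           rh_min=0.40, rh_max=0.55):
    """Decide whether outside air can be used directly for cooling.

    Simplified check: the air must be cooler than the supply setpoint and
    within the relative-humidity band. Setpoints here are assumptions.
    """
    cool_enough = outside_temp_c <= supply_setpoint_c
    humidity_ok = rh_min <= outside_rh <= rh_max
    return cool_enough and humidity_ok

# Hypothetical sensor readings:
print(use_airside_economizer(12.0, 0.50))  # True: free cooling available
print(use_airside_economizer(27.0, 0.45))  # False: too warm, fall back to CRAC/chiller
```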

Economizers can save data center operators substantial operating costs. According to GreenerComputing.org, economization has the potential to reduce the annual cost of a data center's energy consumption by more than 60 percent. Using cooler outside air to cool hardware is also an important component of sustainable green computing practices in general. Unfortunately, economizers are only useful for data centers located in cooler climates.

Data Center 2009

The Data Center 2009 conference series offers fresh guidance on how to turn today's improvements in IT infrastructure and process efficiency into tomorrow's business advantage.

This must-attend conference and showcase will provide a comprehensive agenda that helps those in the data center domain gain deep insight into managing its technological and business aspects. Topics will be built around critical factors such as people, technologies, processes, data center facilities, the business value proposition, and how to effectively manage the impending transitions.

Cisco Data Center

Most enterprises have been exploring cloud computing to see how it might work for them. Cloud computing offers the ability to run servers on the Internet on demand. The storage, compute, and network functions are positioned and ready for use, so servers can be deployed within minutes and paid for only as long as they are in use.
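
A quick back-of-the-envelope comparison makes the pay-per-use point concrete. The hourly rate and the monthly cost of an owned server below are placeholder assumptions, not actual prices from any provider.

```python
# Pay-per-use cloud servers versus a server you own and run 24x7.
# All prices are placeholders, not quotes.
cloud_rate_per_hour = 0.40      # assumed on-demand price
owned_cost_per_month = 220.0    # assumed amortized hardware + power + space

for hours_used in (50, 200, 550, 730):
    cloud_cost = hours_used * cloud_rate_per_hour
    cheaper = "cloud" if cloud_cost < owned_cost_per_month else "owned"
    print(f"{hours_used:>3} h/month: cloud ${cloud_cost:,.0f} vs owned "
          f"${owned_cost_per_month:,.0f} -> {cheaper} is cheaper")
```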

An essential component of any cloud installation is its network. When servers are deployed in a cloud, they need an external network to be usable. The network services they need go beyond simple IP connectivity, and each customer of the cloud will need some customization across several key types of cloud network service.

Data Center Virtualization

Cisco Data Center 3.0 comprises a comprehensive portfolio of virtualization technologies and services that bring network, compute/storage, and virtualization platforms closer together to provide unparalleled flexibility, visibility, and policy enforcement within virtualized data centers:

* Cisco Unified Computing System unifies network, compute, and virtualization resources into a single system that delivers end-to-end optimization for virtualized environments while retaining the ability to support traditional OS and application stacks in physical environments.
* VN-Link technologies, including the Nexus 1000V virtual switch for VMware ESX, deliver consistent per-virtual-machine visibility and policy control for SAN, LAN, and unified fabric.
* Virtual SANs, virtual device contexts, and unified fabric help converge multiple virtual networks to simplify data center infrastructure and reduce total cost of ownership (TCO).
* Flexible networking options to support all server form factors and vendors, including options for integrated Ethernet and Fibre Channel switches for Dell, IBM, and HP blade servers, provide a consistent set of services across the data center to reduce operational complexity.
* Network-embedded virtualized application networking services allow consolidation of remote IT assets into virtualized data centers.