Wednesday, November 11, 2009

Inside A Google Data Center

Google provided a look inside its data center operations at the Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. The presentations included a video tour of a Google data center; click here to see the video.

crac data center

The size of your data center and the number of servers justify an uncompromising design. Computer Room Air Conditioners (CRACs) are precisely what the name implies: air conditioners designed specifically for the needs and demands of the computer room. I'm assuming from the wording of your question that the roof-mounted Air Handling Units (AHUs) you are also considering are conventional units designed for the standard office environment, and probably run off the building's central system. A data center should be as independent as possible.

It is unlikely that standard rooftop AHUs will maintain the close tolerance temperature and humidity control you should have in a high-availability data center environment, which it appears you intend to have if you are staffing the NOC 24x7. (Incidentally, the NOC should be provided with a normal office environment to make it more comfortable and controllable for the occupants.) Standard AHUs are normally designed to handle more latent heat (evaporation of moisture from human activity), whereas CRACs are designed to humidify while also handling mostly sensible heat (the dry heat you feel or sense coming off of computer hardware).
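
To put rough numbers on that sensible-heat point, a common rule of thumb is that the required airflow in CFM is roughly the heat load in BTU/hr divided by 1.08 times the air temperature rise in degrees F. The sketch below is illustrative only: the 1.08 factor assumes sea-level air, and the 50 kW load and 20 degree rise are assumed values, not figures from your room.

```python
# Rough sensible-cooling airflow estimate for a computer room.
# Assumptions (illustrative only): sea-level air (the 1.08 factor),
# a 50 kW IT load, and a 20 deg F rise from cold aisle to hot aisle.

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to carry away a purely
    sensible heat load at the given air temperature rise."""
    btu_per_hr = it_load_kw * 3412.0          # 1 kW is about 3,412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)    # standard sensible-heat relation

if __name__ == "__main__":
    load_kw = 50.0   # assumed IT load
    delta_t = 20.0   # assumed cold-aisle to hot-aisle rise, deg F
    print(f"~{required_cfm(load_kw, delta_t):,.0f} CFM of supply air needed")
```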

You don't identify where in the country you are located, but if you are in the North, roof-mounted AHUs can present some operational and maintenance problems in deep winter, and if you're in the South, they may not provide the level of humidity control you need. With today's concerns about energy efficiency, you will probably also find CRACs to be more economical in the long run, particularly if you are in a part of the country where you can take advantage of winter temperatures to utilize "free cooling."

The raised floor question is one that is widely debated these days. I still prefer a raised floor in most situations, particularly if I have enough building height to utilize the floor for air delivery. (That means 18 inches minimum, and preferably 24 to 30 inches. It also means controlling obstructions under the floor, including piping, power and cable.) With roof-mounted AHUs, you're not going to deliver the air to the floor, so it becomes a matter of personal preference and budget. I still like to have power and permanent cable infrastructure under the floor if I can, but others have different opinions.

If you can't make a raised floor high enough to use it for efficient air delivery, then, whether you use roof-mounted AHUs or CRACs, you will be delivering air from overhead. This can certainly be done, and can be done well, but it requires more design than simply blowing cold air into the room. Warm air rises, so dumping cold air in from above in the closely coupled "Hot Aisle/Cold Aisle" design of a data center works against basic physics: the rising warm air mixes with the cold supply. Either solution will require well-designed ducting to cool and operate efficiently. This is probably easier to do, and certainly less space consuming, with roof-mounted AHUs, for which the return is already at the ceiling. On "Top Blow" CRACs the return air intake is at the lower front or back of the unit, which presents greater duct design problems. In my opinion, however, unless other factors preclude it, I would opt for CRACs in an important facility every time.

If I'm interpreting your question correctly, you are asking if you should use enclosures with fans mounted on the rear doors, blowing into the hot aisles. I consider most of the "fan boosted" solutions to be means of addressing problems caused by a poor basic cooling design. I say "most" because there are cabinets that are truly engineered to support higher density loads than can be achieved with high-flow cabinet doors alone, even in a well designed facility. But these cabinets generally duct the hot air to an isolated return path – usually a plenum ceiling – so they are solving more than just an air flow problem; they are also preventing re-circulation of hot air, which in itself makes a big difference. Remember, however, that fans will try to pull air out of the floor or cold aisle in the quantity they were designed for, and this may air-starve other equipment farther down the row. (Variable speed fan control helps, but if heat load is high the fans will still try to pull maximum air.) A data center is a complete "system," and you can't just insert a "solution" into the middle of it without affecting things elsewhere. There is simply no "magic bullet" for the problem of high-density cooling.
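
To make the air-starvation point concrete, here is a minimal sketch that compares the airflow a row of fan-boosted cabinets will try to pull against what the cooling units can actually deliver to that row. Every number in it is an assumption chosen for illustration, not data from any real facility.

```python
# Minimal airflow-budget check for one row of cabinets.
# All numbers are illustrative assumptions, not vendor or site data.

crac_supply_cfm = 12_000          # assumed usable CRAC airflow reaching this row
cabinet_fan_cfm = {
    "cab-01": 1_600,
    "cab-02": 1_600,
    "cab-03": 2_400,   # fan-boosted, high-density cabinet
    "cab-04": 1_600,
    "cab-05": 2_400,   # fan-boosted, high-density cabinet
    "cab-06": 1_600,
    "cab-07": 1_600,
}

demand_cfm = sum(cabinet_fan_cfm.values())
utilization = demand_cfm / crac_supply_cfm

print(f"Fan demand {demand_cfm:,} CFM vs. supply {crac_supply_cfm:,} CFM "
      f"({utilization:.0%} of available airflow)")
if utilization > 1.0:
    print("Fans are oversubscribing the supply; cabinets at the end of the "
          "row will be air-starved or will recirculate hot exhaust.")
```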

chiller data center

A data center chiller is a cooling system used in a data center to remove heat from one medium (typically the water that feeds the air handlers) and reject it into another (typically the outside air). Chillers are used by industrial facilities to cool the water used in their heating, ventilation and air-conditioning (HVAC) units. Round-the-clock operation of chillers is crucial to data center operation, given the considerable heat produced by many servers operating in close proximity to one another. Without them, temperatures would quickly rise to levels that would corrupt mission-critical data and destroy hardware.

The development of powerful chillers and associated computer room air conditioning (CRAC) units has allowed modern data centers to install highly concentrated server clusters, particularly racks of blade servers. Like many consumer and industrial air conditioners, however, chillers consume immense amounts of electricity and require dedicated power supplies and significant portions of annual energy budgets. In fact, chillers typically consume the largest percentage of a data center's electricity.

Manufacturers also have to account for extreme conditions and variability in cooling loads. This requirement has resulted in chillers that are often oversized, leading to inefficient operation. Chillers require a source of water, preferably already cooled to reduce the energy involved in lowering its temperature further. This water, after absorbing the heat from the computers, is cycled through an external cooling tower, allowing the heat to dissipate. Proximity to cold water sources has led to many major new data centers being sited along rivers in colder climates, such as the Pacific Northwest. The chillers themselves, along with integrated heat exchangers, are located outside of the data center, usually on rooftops or side lots.

Manufacturers have approached next-generation chiller design in a number of ways. For large-scale systems, bearingless designs significantly improve power utilization, given that the majority of chiller inefficiency results from energy lost through friction in the bearings. Smaller systems use smart technologies to rapidly turn a chiller's compressor on and off, letting it work efficiently at anywhere from 10% to 100% of capacity, depending on the workload. IBM's "Cool Battery" technology employs a chemical reaction to store cold.

To maintain uptime, data center managers have to ensure that chillers have an independent generator if a local power grid fails. Without a chiller, the rest of the system will simply blow hot air. While any well-prepared data center has backup generators to support servers and other systems if external power supplies fail, managers installing UPS and HVAC systems must also determine whether a facility provides emergency power to the chiller itself. Data center designers, for this reason, often include connections for an emergency chiller to be hooked up. Multiple, smaller chillers supplied with independent power supplies generally offer the best balance of redundancy and efficiency, along with effective disaster recovery preparation. As recent major outages at hosting providers like Rackspace have demonstrated, however, once knocked offline, chillers may take too long to cycle back up to protect data centers, during which time servers can quickly overheat and automatically shut down.

data center economizer

An economizer is a mechanical device used to reduce energy consumption. Economizers recycle energy produced within a system or leverage environmental temperature differences to achieve efficiency improvements.

Economizers are commonly used in data centers to complement or replace cooling devices like computer room air conditioners (CRACs) or chillers. Data center economizers generally have one or more sets of filters to catch particulates that might harm hardware. These filters are installed in the duct work connecting an outside environment to a data center. Outside air also must be monitored and conditioned for the appropriate humidity levels, between 40% and 55% relative humidity, according to ASHRAE.

There are two versions of the device used in data centers: air-side economizers and water-side economizers.

* Air-side economizers pull cooler outside air directly into a facility to prevent overheating (a simplified control sketch follows this list).
* Water-side economizers use cold outside air to cool the water in an exterior cooling tower. The chilled water from the tower is then used in the air conditioners inside the data center instead of mechanically chilled water, reducing energy costs. Water-side economizers often operate at night to take advantage of cooler ambient temperatures.
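
As a simplified illustration of the decision an air-side economizer makes, the sketch below combines a supply-temperature setpoint with the ASHRAE humidity band cited above. The setpoints and the sample readings are assumptions; a real building-management system would also consider enthalpy, dew point, and filter status.

```python
# Simplified air-side economizer decision logic (a sketch, not a real
# building-management-system algorithm). Setpoints are assumptions,
# loosely based on the 40-55% relative-humidity band cited above.

def use_outside_air(outside_temp_c: float,
                    outside_rh_pct: float,
                    supply_setpoint_c: float = 18.0,
                    rh_low: float = 40.0,
                    rh_high: float = 55.0) -> bool:
    """Return True when outside air alone can carry the cooling load."""
    cool_enough = outside_temp_c <= supply_setpoint_c
    humidity_ok = rh_low <= outside_rh_pct <= rh_high
    return cool_enough and humidity_ok

# Example readings (made up): a cool, moderately humid night.
if use_outside_air(outside_temp_c=14.0, outside_rh_pct=48.0):
    print("Dampers open: free cooling, mechanical cooling idles.")
else:
    print("Dampers closed: fall back to CRACs or chilled water.")
```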

Economizers can save data center operators substantial operating costs. According to GreenerComputing.org, economization has the potential to reduce the annual cost of a data center's energy consumption by more than 60 percent. Use of cooler external environmental temperatures to preserve hardware is an important component in sustainable green computing practices in general. Unfortunately, economizers are only useful for data centers located in cooler climates.

data center 2009

The Data Center 2009 conference series offers fresh guidance on how to turn today's improvements in IT infrastructure and process efficiency into tomorrow's business advantage.

This must-attend conference and showcase will strive to provide a comprehensive agenda that helps those working in the data center domain gain deep insight into managing both the technological and business aspects. Topics will be built around critical factors such as people, technologies, processes, data center facilities, the value proposition of the business, and how to effectively manage the impending transitions.

cisco data center

Most enterprises have been exploring cloud computing to see how it might work for them. Cloud computing offers the ability to run servers on the Internet on demand. The storage, compute, and network functions are positioned and ready for use, so servers can be deployed within minutes, and paid for only for as long as they are in use.

An essential component of any cloud installation is its network. When servers are deployed in a cloud, they need an external network to be usable. The network services they need go beyond simple IP connectivity, and each customer of the cloud will need some customization of those services.

data center virtualization

Cisco Data Center 3.0 comprises a comprehensive portfolio of virtualization technologies and services that bring network, compute/storage, and virtualization platforms closer together to provide unparalleled flexibility, visibility, and policy enforcement within virtualized data centers:

* Cisco Unified Computing System unifies network, compute, and virtualization resources into a single system that delivers end-to-end optimization for virtualized environments while retaining the ability to support traditional OS and application stacks in physical environments.
* VN-Link technologies, including the Nexus 1000V virtual switch for VMware ESX, deliver consistent per-virtual-machine visibility and policy control for SAN, LAN, and unified fabric.
* Virtual SANs, virtual device contexts, and unified fabric help converge multiple virtual networks to simplify and reduce data center infrastructure and total cost of ownership (TCO).
* Flexible networking options to support all server form factors and vendors, including options for integrated Ethernet and Fibre Channel switches for Dell, IBM, and HP blade servers, provide a consistent set of services across the data center to reduce operational complexity.
* Network-embedded virtualized application networking services allow consolidation of remote IT assets into virtualized data centers.

vmware data center

Microsoft archrival VMware announced this week a set of future technologies called the “Virtual Datacenter OS” that some are comparing with Windows Server 2008.

But Windows Server 2008 and its integrated Hyper-V hypervisor aren’t what Microsoft is going to be pitting against VMware, Google and Amazon in the brave, new datacenter-centric world. Instead, Microsoft’s soon-to-be-unveiled “Zurich” foundational services and its “RedDog” system services are what the Redmondians will be fielding against its cloud competitors.

Not so different, are they? (VMware, like Microsoft, is talking about spanning both “on-premise” and “cloud” datacenters. Not too surprising when you remember from where VMware CEO Paul Maritz hails….)

Microsoft is expected to detail at least the mid-tier — the Zurich services — at its Professional Developers Conference in late October. I’m hearing that at least some of Microsoft’s Zurich deliverables are slated to be released in final form by mid-2009.

At the base level, down at the substrate that Microsoft previously has described as its “Global Foundation Services” layer, is where I’m betting RedDog will fit in. These are services like provisioning, networking, security, management, and operations. (Virtualization fits in here as well, in the context of helping users migrate between the cloud and on-premise and vice versa.) RedDog has been described as the horizontal “cloud OS” that Microsoft is building to power datacenters.

At the next level, “Zurich” — which Microsoft also has described as the Live Platform services layer — Microsoft will deliver federated identity, data synchronization, workflow and “relay services.” I’ve been hearing a bit more lately about Relay, which I believe Microsoft also has called “Overlay” services.

Overlay is a peer-to-peer network that will help bridge distributed, parallel systems. Supposedly, Overlay will help Microsoft do everything from load balancing and replication of application states across the network of machines, to providing a discovery framework for Web services to make use of presence and federation services. Elements of the overlay network are expected to be part of .Net 4.0, the next version of Microsoft’s .Net Framework.

Bottom line: Don’t think Windows Server 2008 is the be-all/end-all of Microsoft’s datacenter OS story. There is lots more that the company still won’t discuss publicly, but is known to be happening in the background.

disaster recovery data center

We’re starting to see some interesting case studies for servers in shipping containers. In a profile of Revlon CTO David Giambruno, ComputerWorld has some details on Revlon’s use of distributed data center containers in its disaster recovery network. Here’s an excerpt:

“Rather than have parallel datacenters and SANs in various countries, Giambruno’s plan put high-capacity storage at five sites across the world, consolidating data and applications at its U.S. datacenter. Using the same shipping system as for its cosmetics manufacturing, Revlon sent out five pre-loaded “Mini Me” datacenter containers to its four other IT centers, creating a global disaster recovery network of identical systems that assured resources would work when moved. These Mini Me datacenters have the SAN, storage, and servers for local operations and can support external fail-over from other locations if needed.”

Revlon says this approach reduced its datacenter power consumption by 72 percent and cut disaster recovery costs in half, as well as dramatically reducing the time required to back up 6.5 terabytes of data each week. Read ComputerWorld for more.

data center energy efficiency

The EPA ENERGY STAR Program released to Congress a report assessing opportunities for energy efficiency improvements for government and commercial computer servers and data centers in the United States.

The study projects near-term growth in energy use of U.S. computer servers and data centers, assesses potential cost and energy savings related to computer server and data center energy efficiency improvements, and recommends potential incentives and voluntary programs to promote energy-efficient computer servers and data centers. The report complements EPA’s ongoing efforts to develop new energy efficiency specifications for servers, including market and technical research, industry collaboration, and explorations into a new ENERGY STAR buildings benchmark for data centers that reflects whole-building operations.

The report recommends a mix of programs and incentives, as well as a holistic approach to achieve significant savings. Recommendations include:

* Collaborating with industry and other stakeholders on the development of a standardized whole-building performance rating system for data centers.
* Federal leadership through implementation of best practices for its own facilities.
* Development of ENERGY STAR specifications for servers and related product categories.
* Encouraging electric utilities to offer financial incentives that facilitate data center energy efficiency improvements.
* Federal government partnership with industry on a CEO Challenge, calling on private-sector CEOs to assess and improve the energy efficiency of their data centers.
* Public/private research and development into advanced technology and practices including computing software, IT hardware, power conversion, heat removal, controls and management, and cross-cutting activities.

national data centre pune

The National Data Centre (NDC) is the sole custodian of all meteorological data collected from various parts of India. The data span more than 125 years. The mandate is to preserve quality-controlled data and supply it for weather prediction, aviation, agriculture, environmental studies, oceanography and shipping, and to researchers at various institutions and universities.

google data api

Google's mission is to organize the world's information and make it universally accessible and useful. This includes making information accessible in contexts other than a web browser and accessible to services outside of Google. As an end-user or a developer, you are the owner of your information, and we want to give you the best tools possible to access, manipulate, and obtain that information.
Why AtomPub?

Web syndication is an effective and popular method for providing and aggregating content. The Google Data Protocol extends AtomPub as a way to expand the types of content available through syndication. In particular, the protocol lets you use the AtomPub syndication mechanism to send queries and receive query results. It also lets you send data to Google and update data that Google maintains.
Why JSON?

JSON, or JavaScript Object Notation, is a lightweight data interchange format in widespread use among web developers. JSON is easy to read and write; you can parse it using any programming language, and its structures map directly to data structures used in most programming languages. Within the Google Data Protocol, JSON objects simply mirror the Atom representation.
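
As a rough illustration of how that JSON output mirrors the Atom structure, the sketch below requests a feed with the protocol's alt=json parameter and reads a few fields. The feed URL is a placeholder, and the code assumes the protocol's convention of exposing Atom text values under a "$t" member.

```python
# Sketch: fetching a Google Data feed as JSON instead of Atom.
# The feed URL is a placeholder; substitute a real GData feed URL.
import json
import urllib.request

FEED_URL = "https://gdata.example.com/feeds/someuser/public/full?alt=json"  # placeholder

with urllib.request.urlopen(FEED_URL) as response:
    doc = json.load(response)

feed = doc["feed"]
print("Feed title:", feed["title"]["$t"])      # Atom text constructs appear as {"$t": ...}
for entry in feed.get("entry", []):
    print("-", entry["title"]["$t"])
```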

microsoft data center

Though the building alone covers a whopping 11 acres, you can't even see Microsoft's new $550 million data center in the hills west of San Antonio until you're practically on top of it. But by that point, you can hardly see anything else.

A "spine" of wires and pipes supplies power, cooling and other vital resources throughout Microsoft's Chicago data center, which is under construction.

These days, the massive data center is a bustling construction zone where visitors have to wear hardhats, helmets, orange safety vests, goggles and gloves. By September, it'll be the newest star in Microsoft's rapidly expanding collection of massive data centers, powering Microsoft's forays into cloud computing like Live Mesh and Exchange Online, among plenty of other as-yet-unannounced services. Pulling in, visitors are stopped by Securitas guards who check IDs and ask if they work for Microsoft. An incomplete gate marks the way. Microsoft's general manager of data center services, Mike Manos, won't say exactly what security measures will be in place when the data center opens, but won't rule anything out. "Will the gates be able to stop a speeding Mack truck?" I ask. "Or more," he responds. "Will you have biometrics?" "We have just about everything."

As the car rounds the bend beyond the gate, the building sweeps into full view. The San Antonio data center building itself is 475,000 square feet, or about 11 acres. It's a 1.3 mile walk to circumnavigate the building. To get a perspective on that, it's one building that's the size of almost 10 football fields laid out side-by-side, or 1/10th the floor space of the entire Sears Tower, covered with servers and electrical equipment. "I thought I understood what scale looked like," Manos says.

When the San Antonio data center was under peak construction, 965 people were working full time to build it, with more than 15 trucks of material coming and going each day in order to get the job done in 18 months from scouting the site to opening up. The facilities were built with continuous workflow of materials in mind, even after the site's completion.

As one walks toward the data center's main entrance, a feature that stands out is a row of several truck bays, much like those seen in an industrial park. Trucks pull up and leave servers or other materials inside the bays or "truck tracks," to be picked up and inventoried in the next room and then moved to storage or deployment.

Most everything in the data center is functional. On the small scale, wainscoting-like pieces of plywood cover the bottom of hallway walls to protect both the walls and servers and other equipment moving back and forth. On the large scale, San Antonio is actually two data centers side by side to separate business risk. "One side could burn down and the other one could continue to operate," Manos says.

The components inside are just as gargantuan as those on the outside. Seven massive battery rooms contain hundreds of batteries and 2.7 MW of back-up power apiece. Very few industrial sites, among them aluminum smelters, silicon manufacturers and automobile factories, consume as much energy as mega data centers of the order Microsoft is building.

yahoo data center

Companies like Google and IBM are trying to lead the world in cutting-edge, efficient data centers. Not to be outdone, on Tuesday Yahoo! announced they're hoping to change the future of data centers as well. The company unveiled plans to build one of the world's most efficient data centers in Lockport, NY, and the details do sound pretty exciting.

The data center will be powered mainly by hydroelectric power from Niagara Falls, with 90 percent of that energy going towards powering the servers. The center itself will be built to resemble a chicken coop, using 100 percent outside air to cool the servers, a task which typically gobbles up 50 percent of a data center's energy supply. And the company expects the yearly PUE average to be 1.1 or better.

In addition to building this super-efficient data center, the company also committed to reducing the carbon footprint of all their data centers by 40 percent by 2014. They intend to accomplish this through using more renewable energy sources to power their data centers, implementing more efficient building designs and improving the efficiency of the servers themselves.

Another major commitment made in this announcement was that the company would cease purchasing carbon offsets and was aiming to reduce their carbon impact directly through decreasing energy consumption. We would love to hear of more companies relying less on offsets and more on energy-saving improvements.

google floating data center

Google Inc., which has been building out its data center inventory for the past few years, is literally floating its latest idea for the location of such facilities with the U.S. Patent and Trademark Office.

The company filed a patent application for a "water-based data center" detailing a floating data center, complete with an energy supply fed by a wave-powered generator system and a wind-powered cooling system using seawater.

The patent application, published Aug. 28, describes a modular setup that calls for "crane-removable modules" that store racks of computers. The modules would facilitate adding, subtracting and moving the computing power.

The patent application also details tapping water motion to generate power and the ability to configure the system in many different ways, including on-ship and on-shore data centers, various cooling mechanisms, backup systems, and even temporary housing and helicopter pads to support IT maintenance staffers.

Google is not the first to consider alternatives to the power-sucking data centers that it and others are constructing around the globe, to suggest unique locations or to tap the sea for innovative IT ideas.

Both Google and Microsoft Corp. are already using hydroelectric power options in the Northwest.

A couple in Nebraska that lives underground in a 1960s-era Atlas E Missile Silo wants to turn 15,000 square feet of their bunker into a highly secure data center.

And a company called SeaCode Inc. a few years ago proposed Hybrid-Sourcing, a venture that loads a fully staffed luxury liner with software engineers to get around H-1B visa restrictions and provide U.S. businesses with high-end tech workers.

Google officials say there is nothing to announce now regarding its water-based data center idea.

"We file patent applications on a variety of ideas that our employees come up with. Some of those ideas later mature into real products, services or infrastructure; some don't. We do a lot to make our infrastructure scalable and cost-efficient," a company spokesman said in response to an e-mail.

The idea, however, is fully outlined in the patent application.

Google says computing units could be mounted in shipping containers, which could be stored on ships or floating platforms and loaded and unloaded via equipment already used in shipping ports.

The computers in the containers, or "modules," could easily be replaced or updated as technology advances and adverse sea conditions exact their toll.

Proposed configurations include putting the modules on land next to a body of water.

Water is key for generating power, according to the patent, which cites the use of Pelamis machines and other devices such as wind generators to create energy.

The Pelamis machines use a series of hydraulics powered by water motion to drive motors connected to electrical generators. Other devices, such as a floating power-generation apparatus, use tethers and a spring-loaded hub to gather power from the rise and fall of water levels.

google ocean data center

Google Maps (GM) on the web and Google Earth (GE) as a 3D interactive atlas software application are ideal tools for sharing geographical information in a simple way.
GE as a mass-market visualization product is definitely a new step in the evolution of mapping and GIS, especially in the way it can be used with a couple of mouse clicks by anybody not expert in cartography.

So this webpage is the MIS contribution to this unique method of information gathering: sharing detailed information in the marine domain while awaiting Google 3D Maps for Oceans (possibly under the product name Google Oceans, though the name could change at launch, expected around February 2, 2009). That future release of the Google research project will provide visualization tools for marine data throughout the world and enable users to navigate below the sea surface.

google data center pue

Google continues to improve its energy efficiency, and is telling the industry how it’s doing it. After years of secrecy surrounding its data center operations, Google is disclosing many of its innovations today at the first Google Data Center Efficiency Summit in Mountain View, Calif.

In a morning presentation, Google engineers addressed the company’s Power Usage Effectiveness (PUE) ratings, which have generated discussion within the industry since Google disclosed in October that its six company-built data centers had an average PUE of 1.21. That benchmark improved to 1.16 in the fourth quarter, and hit 1.15 in the first quarter of 2009, according to Google’s Chris Malone. The most efficient individual data center (described as “Data Center E”) has a PUE of 1.12.

“These are standard air-cooled servers, and best practices is what enabled these results,” said Malone. “What’s encouraging is that we’ve achieved this through the application of practices that are available to most data centers. There’s great potential for all data centers to improve their PUE.”

But there’s also some serious Google magic at work. One of the keys to Google’s extraordinary efficiency is its use of a custom server with a power supply that integrates a battery, allowing it to function as an uninterruptible power supply (UPS). The design shifts the UPS and battery backup functions from the data center into the server cabinet (see our February 2008 story describing this technology). This design provides Google with UPS efficiency of 99.9 percent, compared to a top end of 92 to 95 percent for the most efficient facilities using a central UPS.
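
To see why that efficiency gap matters at scale, here is a back-of-the-envelope comparison of annual UPS losses. The 10 MW IT load is an assumed figure for illustration; the efficiency numbers are the ones quoted above.

```python
# Back-of-the-envelope UPS loss comparison. The 10 MW IT load is an
# assumption; the efficiency figures are the ones quoted in the article.

IT_LOAD_MW = 10.0
HOURS_PER_YEAR = 8_760

def annual_ups_loss_mwh(efficiency: float) -> float:
    """Energy dissipated in the UPS stage per year at the given efficiency."""
    input_mw = IT_LOAD_MW / efficiency
    return (input_mw - IT_LOAD_MW) * HOURS_PER_YEAR

central_ups = annual_ups_loss_mwh(0.93)    # mid-range of a 92-95% central UPS
on_board = annual_ups_loss_mwh(0.999)      # per-server battery design

print(f"Central UPS loss:   {central_ups:,.0f} MWh/year")
print(f"On-board batteries: {on_board:,.0f} MWh/year")
print(f"Difference:         {central_ups - on_board:,.0f} MWh/year")
```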

Malone addressed the details of how Google measures its PUE, picking up on recent best practices outlined by The Green Grid. Google measures its power use at the power distribution unit (PDU), since it has power tracking on the newer versions of its custom servers, but not all of them. It takes measurements on a continuous basis in four of its six PUE-rated data centers, and on a daily basis in the other two.
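
For readers less familiar with the metric, PUE is simply total facility power divided by the power delivered to the IT equipment, so a PUE of 1.16 means about 0.16 watts of overhead (cooling, power conversion, lighting) for every watt of compute. Here is a minimal sketch with assumed meter readings, not Google's actual figures:

```python
# PUE = total facility power / IT equipment power.
# The meter readings below are assumptions for illustration only.

total_facility_kw = 5_800.0   # utility feed to the whole building
it_equipment_kw = 5_000.0     # power measured at the IT load (e.g., PDU output)

pue = total_facility_kw / it_equipment_kw
overhead_kw = total_facility_kw - it_equipment_kw

print(f"PUE = {pue:.2f} (overhead: {overhead_kw:,.0f} kW, "
      f"{overhead_kw / it_equipment_kw:.0%} on top of the IT load)")
```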

Malone said he expects Google’s PUE to head even lower. “A reasonable target can be 1.10,” said Malone, who said Google has designed its new data center in Belgium to operate without any chillers, which are traditionally the most energy-hungry pieces of data center gear. Google makes extensive use of free cooling, which uses outside air rather than air conditioners to keep the facility cool. But the Belgium site will be the first one to forego chillers entirely, a tactic that is only possible in areas with a particular temperature range.

Urs Holzle, Google’s director of operations, said the growing interest in data center efficiency was a key factor in Google’s decision to share more about its operations. “We were really encouraged by the renewed industry interest,” said Holzle. “There wasn’t much benefit of preaching efficiency when not many people were interested.

“We’re proud of our results, but the point isn’t where we are, but where everyone can be,” he added. “We all need to improve our data centers. There’s a really a lot of low-hanging fruit.”

And then there’s Google’s custom servers and UPS design, which was on display. Holzle was asked whether he expected to see server vendors introduce similar designs. “I think the on-board UPS is a natural for many applications,” he said. “We have patents, including some that have been published. We’d definitely be willing to license them to vendors.”

We’ll have more updates later today from the afternoon sessions, and much more coverage in days to come.

quincy data center

You’re unlikely to ever see the inside of a Google data center. But not so for Microsoft, which recently allowed the BBC to film inside its data center in Quincy, Washington. The BBC’s Rory Cellan-Jones provides a brief video tour of the 470,000 square foot facility in central Washington state, which houses the equipment powering Microsoft’s new Windows Azure cloud developer platform.

data center efficiency

Data center efficiency is critical to business needs as computer requirements grow, density increases, and power and cooling demands climb. Intel is helping the industry address the areas where data centers consume power, including power conversion and distribution, cooling, and even lighting.

To do this, Intel is focusing on data center performance metrics and delivering instrumentation to improve energy efficiency at the processor, platform, and data center levels.

In collaboration with industry standards organizations such as the Standard Performance Evaluation Corporation (SPEC) and The Green Grid, Intel is helping to develop a more reliable metric that measures energy consumption and performance for all levels of utilization, from idle to peak, that a server would experience in a typical day, week or month.
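
The shape of such a metric can be sketched in a few lines: measure throughput and average power at a ladder of load levels, including active idle, and report an aggregate performance-per-watt figure. The numbers and the aggregation rule below are illustrative assumptions, not the official SPEC or Green Grid definition.

```python
# Illustrative aggregate performance-per-watt across a ladder of load levels.
# Both the measurements and the aggregation rule are assumptions sketched in
# the spirit of load-ladder benchmarks; they are not the official metric.

measurements = [
    # (target load, operations/sec, average watts)
    (1.00, 100_000, 320.0),
    (0.80,  80_000, 290.0),
    (0.60,  60_000, 255.0),
    (0.40,  40_000, 220.0),
    (0.20,  20_000, 190.0),
    (0.00,       0, 160.0),   # active idle still draws power
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)

print(f"Aggregate efficiency: {total_ops / total_watts:,.1f} ops per watt "
      f"(active idle alone draws {measurements[-1][2]:.0f} W)")
```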

rackspace data center

We know that every second your network is down you're losing opportunities, revenue and the confidence of your users and visitors. So we decided to do something about it. We designed and built the Zero-Downtime Network to minimize downtime, and it works so well that we guarantee it: if it goes down, we give you money back.

cloud computing data center

Yesterday at Microsoft’s 10th annual Microsoft Management Summit 2009, Microsoft gave the world a glimpse of how the datacenter will be evolving from today into tomorrow and into the era of cloud-based computing. Microsoft’s key focus was on the virtualized environments of tomorrow’s datacenters, and how a plethora of mobile wireless devices will connect to the cloud.

Brad Anderson, general manager of the Management and Services Division at Microsoft, said regarding today’s datacenter, “IT is faced with a growing set of end-user demands and business-critical expectations around service availability [through] familiar tools that provide a complete view of service levels, applications, infrastructure and clients.” However, the manner in which this will be delivered to customers is changing.

facebook data center

If your users are uploading 40 million photos a day, what does your data center look like? A new video seeking to recruit engineers for Facebook provides a glimpse of one of the social network’s data center facilities, along with some facts about the social network’s amazing growth. The company’s data centers store more than 40 billion photos, and users upload 40 million new photos each day – about 2,000 photos every second. Not surprisingly, the racks are packed. The facility is using a raised-floor design, with no containment but generous spacing between racks.

data center cooling

The cooling infrastructure is a significant part of a data center. The complex network of chillers, compressors and air handlers creates the optimal computing environment, ensuring the longevity of the servers installed within and the vitality of the organization they support. Yet this ecosystem has come at a price.

The EPA's oft-cited 2007 report predicted that data center energy consumption, if left unchecked, would reach 100 billion kWh by 2011, with a corresponding energy bill of $7.4 billion. This conclusion isn't strictly based on Moore's Law or the need for greater bandwidth; the estimate assumes that tomorrow's processing power will be addressed with yesterday's cooling strategies. The shortcomings of those designs, coupled with demand for more processing power, would require 10 new power plants to provide the juice for it all, according to the report.

In light of this news, many industry insiders are turning a critical eye toward cooling, recognizing both the inefficiencies of current approaches and the improvements possible through new technologies. The information contained here is designed to assist the data center professional who, while keeping uptime and redundancy inviolate, must also balance growing demand for computing power with pressure to reduce energy consumption.

data center servers

While research in TCP/IP processing has been under way for several decades, the increasing networking needs of server workloads and evolving server architectures point to the need to explore TCP/IP acceleration opportunities. Researchers at Intel Labs are experimenting with mechanisms that address system and memory stall-time overheads. They are also studying the effects of interrupt and connection-level affinity on TCP/IP processing performance.

Further, they have begun exploring mechanisms to support latency-critical TCP/IP usage models such as storage over IP and clustered systems. The goal is to identify the right level of hardware support for communication on future CMP processors and server platforms.
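
As a small, concrete illustration of the affinity idea (not Intel's research code), the sketch below pins the process that services a listening socket to a single CPU, so interrupt handling, protocol processing, and the application can share that core's cache. It assumes Linux, where sched_setaffinity is exposed to Python; the CPU number and port are arbitrary choices.

```python
# Sketch: pin this process to CPU 2 so socket handling stays on one core
# and keeps its cache warm. Linux-only (os.sched_setaffinity); the choice
# of CPU 2 and port 9000 are arbitrary assumptions for the example.
import os
import socket

os.sched_setaffinity(0, {2})   # 0 = current process; run only on CPU 2

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(128)
print("Serving on CPUs:", os.sched_getaffinity(0))

conn, addr = srv.accept()       # the connection is processed on the pinned CPU
data = conn.recv(4096)
conn.sendall(data)              # trivial echo to exercise TCP send/receive
conn.close()
srv.close()
```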

data center pue

Energy efficiency is rapidly becoming a key data center issue. The more energy efficient the data center, the lower the ongoing operating costs for facilities’ users. To help potential customers compare the energy efficiency of a Digital Realty Trust data center with those of other firms under consideration, we have begun to publish the PUE number as a benchmark for each facility we are building, from 2008 onward.

google search data

With Google Insights for Search, you can compare search volume patterns across specific regions, categories, time frames and properties. See examples of how you can use Google Insights for Search.

google data center

Google will soon be publishing videos of many of the sessions at its Google Data Center Efficiency Summit held Wednesday in Mountain View, Calif. In the meantime, here’s a sneak preview with outtakes from the video tour of a Google data center, which showcased the company’s use of shipping containers to house servers and storage. Each of these 40-foot data center containers can house up to 1,160 servers, and Google has been using them since it built this facility in late 2005. The summit was a Google event for industry engineers and executives to discuss innovations to improve data center efficiency.

google datacenter search

Search the SERPs (Search Engine Result Pages) across Google datacenters. There are many different Google datacenters, and each of them can respond with different results for the same search query. Most of the time, seeing results that vary from datacenter to datacenter means that Google is in the process of updating its index. Visit Pagerank Checker.

datacenter pagerank check

In the search engine optimization world, Google PageRank is one of the main indicators of your SEO progress. This search engine optimization tool checks the PageRank on the major Google datacenters. When the results vary from datacenter to datacenter, it usually indicates that Google is in the middle of a PageRank update. Visit Pagerank Checker.

Google datacenter check

Check a website's Google PageRank on major Google datacenters instantly; visit Google datacenter check.

google datacenter

Google’s data centers are the object of great fascination, and the intrigue about these facilities is only deepened by Google’s secrecy about its operations. We’ve written a lot about Google’s facilities, and thought it would be useful to summarize key information in a series of Frequently Asked Questions: The Google Data Center FAQ.

Why is Google so secretive about its data centers?
Google believes its data center operations give it a competitive advantage, and says as little as possible about these facilities. The company believes that details such as the size and power usage of its data centers could be valuable to competitors. To help maintain secrecy, Google typically seeks permits for its data center projects using limited liability companies (LLCs) that don’t mention Google, such as Lapis LLC (North Carolina) or Tetra LLC (Iowa).

How many data centers does Google have?
Nobody knows for sure, and the company isn’t saying. The conventional wisdom is that Google has dozens of data centers. We’re aware of at least 12 significant Google data center installations in the United States, with another three under construction. In Europe, Google is known to have equipment in at least five locations, with new data centers being built in two other venues.

Where are Google’s data centers located?
Google has disclosed the sites of four new facilities announced in 2007, but many of its older data center locations remain under wraps. Much of Google’s data center equipment is housed in the company’s own facilities, but it also continues to lease space in a number of third-party facilities. Much of its third-party data center space is focused around peering centers in major connectivity hubs. Here’s our best information about where Google is operating data centers, building new ones, or maintaining equipment for network peering. Facilities we believe to be major data centers are bold-faced.