Tuesday, April 7, 2009

Google Reveals Its Server Hardware

After a very hectic stretch that looks like it is going to last a while longer, I am finally sitting down to write a bit.
This is a piece of news that caught my attention: Google talking about its servers.
It certainly gives an idea of how they think at large scale (it reminds me of when I designed PCBs (circuit boards) and we racked our brains to drop a single resistor: it was only 1 cent less, but multiplied by an enormous number of boards it meant a lot of profit margin...). That is exactly how they think.
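Just to make that scale effect concrete, here is a trivial sketch of the arithmetic; the per-unit saving and the production volume are made-up numbers, not anyone's real figures:

```python
# Illustrative only: how a tiny per-unit saving adds up at volume.
# Both numbers below are hypothetical, just to show the arithmetic.
saving_per_board_eur = 0.01      # e.g. dropping one 1-cent resistor
boards_produced = 5_000_000      # hypothetical production run

total_saving = saving_per_board_eur * boards_produced
print(f"Total saving: {total_saving:,.2f} EUR")   # 50,000.00 EUR
```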

I was also struck by how much attention they pay to air extraction. It is funny, because after all the battles I have had in data centers, in the end you realize you spend a fortune on cooling, when the cheapest, most practical and most reliable approach is to exhaust the hot air (extraction) and bring in air from the building (supply at 22 °C), helped of course by some additional cooling.
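To give an idea of why moving air does most of the work, here is a rough sensible-heat estimate of the airflow needed to carry away a given IT load; the heat load and the exhaust temperature are assumptions for illustration, with standard air properties:

```python
# Rough sensible-heat airflow estimate: how much air you need to move
# to carry away a given IT heat load. Numbers are illustrative assumptions.
heat_load_kw = 100.0        # assumed IT load in the room (kW)
supply_temp_c = 22.0        # building air supplied to the cold aisle
exhaust_temp_c = 34.0       # assumed hot-aisle exhaust temperature
air_density = 1.2           # kg/m^3, air at roughly room conditions
air_cp = 1.005              # kJ/(kg*K), specific heat of air

delta_t = exhaust_temp_c - supply_temp_c
# Q = m_dot * cp * delta_T  ->  m_dot = Q / (cp * delta_T)
mass_flow = heat_load_kw / (air_cp * delta_t)   # kg/s
volume_flow = mass_flow / air_density           # m^3/s
print(f"Airflow needed: {volume_flow:.1f} m^3/s "
      f"({volume_flow * 3600:.0f} m^3/h)")
```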

Here is the text: http://blogs.zdnet.com/gadgetreviews/?p=2936&tag=nl.e019

Google for the first time on Wednesday revealed the hardware at the core of its Internet operations at a conference about the increasingly prominent issue of data center efficiency, reports CNET’s Stephen Shankland.

Instead of buying hardware from companies such as Dell, Hewlett-Packard, IBM, or Sun Microsystems, Google designs and builds its own. (The company has hundreds of thousands of servers.)

Ben Jai, who designed many of Google’s servers, unveiled the server hardware. The first surprise: each server has its own 12-volt battery to supply power if there’s a problem with the main source of electricity.

Shankland writes:

Why is the battery approach significant? Money.

Typical data centers rely on large, centralized machines called uninterruptible power supplies (UPS)–essentially giant batteries that kick in when the main supply fails and before generators have time to kick in. Building the power supply into the server is cheaper and means costs are matched directly to the number of servers, Jai said.

“This is much cheaper than huge centralized UPS,” he said. “Therefore no wasted capacity.”

Efficiency is another financial factor. Large UPSs can reach 92 to 95 percent efficiency, meaning that a large amount of power is squandered. The server-mounted batteries do better, Jai said: “We were able to measure our actual usage to greater than 99.9 percent efficiency.”
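A quick back-of-the-envelope comparison shows what those efficiency figures mean in wasted power. The 92-95% and 99.9% values come from the article quoted above; the 1 MW load is my own assumed example, not a figure from Google:

```python
# Power lost in the UPS/battery stage at the quoted efficiencies.
# The 1 MW IT load is an assumed example.
it_load_kw = 1000.0

for label, efficiency in [("centralized UPS (92%)", 0.92),
                          ("centralized UPS (95%)", 0.95),
                          ("per-server battery (99.9%)", 0.999)]:
    input_kw = it_load_kw / efficiency
    wasted_kw = input_kw - it_load_kw
    print(f"{label}: ~{wasted_kw:.0f} kW lost for a {it_load_kw:.0f} kW load")
```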

Since 2005, Google’s data centers have been composed of standard shipping containers — each with 1,160 servers and a power consumption that can reach 250 kilowatts, the company said.
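Dividing the two container figures quoted above gives the implied average power budget per server:

```python
# Average per-server power implied by the container figures quoted above.
servers_per_container = 1160
container_power_kw = 250.0

watts_per_server = container_power_kw * 1000 / servers_per_container
print(f"~{watts_per_server:.0f} W per server on average")   # ~216 W
```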

Google has been using the design since 2005 and now is in its sixth or seventh generation of design.

“It was our Manhattan Project,” Jai said of the design.

Energy efficiency, power distribution, cooling, and ensuring hot and cool air don’t intermingle are all on the top of Google’s list, the company said.

As for the actual unit, the server was 3.5 inches thick (2U) and had two processors, two hard drives, and eight memory slots mounted on a Gigabyte motherboard. Google uses x86 processors from both AMD and Intel. The battery design is used on its network equipment as well, Jai said in Shankland’s article.

What’s fascinating about all this is that Google operates servers on such an immense scale that every decision it makes in turn has a large effect (and potential cost or savings).

Take the power supply design, for example: Google’s designs supply only 12-volt power, with the necessary conversions taking place on the motherboard. That adds $1 or $2 to the cost of the motherboard, Shankland writes, “but it’s worth it not just because the power supply is cheaper, but because the power supply can be run closer to its peak capacity, which means it runs much more efficiently.” Google even pays attention to the greater efficiency of transmitting power over copper wires at 12 volts compared to 5 volts, Shankland writes.
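The 12-volt versus 5-volt point comes down to resistive loss in the copper: for the same delivered power, a lower voltage means a higher current, and the loss grows with the square of the current. A minimal sketch, where the wire resistance and the load are assumptions for illustration only:

```python
# I^2*R loss in a copper run for the same delivered power at 12 V vs 5 V.
# Wire resistance and load are illustrative assumptions.
wire_resistance_ohm = 0.01   # assumed resistance of the distribution run
load_w = 200.0               # assumed power delivered to the board

for volts in (12.0, 5.0):
    current = load_w / volts             # I = P / V
    loss = current ** 2 * wire_resistance_ohm
    print(f"{volts:>4.0f} V: {current:.1f} A, {loss:.2f} W lost in the wire")
```

At 5 V the current is 2.4 times higher than at 12 V, so the wiring loss is about 5.8 times larger for the same load.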

That kind of attention can translate to big savings in power or cost — or both.