Reducing the environmental footprint of data centers

Author:
Patrick Le Fèvre, Ericsson Power Modules

Date:
12/31/2014

Power optimization ensures servers will be ready for lifecycle assessment

A pilot scheme launched by the European Commission in 2013 is likely to lead to a set of guidelines that will help reduce the environmental footprint of data centers. The Product Environmental Footprint (PEF) pilots will harmonize industry regulations across the European Union and include projects that look at IT equipment such as servers and the power supplies that support them (see Figure 1).


Figure 1: The Product Environmental Footprint pilots include projects that look at high performance IT equipment used within data centers.

Product Environmental Footprint Category Rules (PEFCR), being defined by the European Commission and leading manufacturers, will provide product-category-specific, lifecycle-based rules for design and production, aiming at a more detailed and complete treatment of environmental impact than today's relatively simple operational-efficiency calculations.

Lifecycle assessments take into account not just the energy consumed during product operation but the "embodied" energy and resource consumption of manufacture, installation, decommissioning and recycling. The use of lifecycle analysis provides both vendors and users with a more informed view of the environmental impact of the overall system. The result will be a more holistic view of the decisions that matter in the creation of green data centers and the IT equipment that goes into them. For example, a product with high operational efficiency but low lifetime reliability, or one based on a less sustainable design, may not provide the basis for a greener product line.

The key for manufacturers and integrators is to minimize each of the impact variables to ensure a balanced approach to environmental impact. The power supply architecture is a key part of ensuring the right balance, minimizing both operational energy usage and embodied resource demand. This entails the use of a power-supply infrastructure that supports the real-world demands of data-center servers and which works with today’s high-density computer architectures, which make intensive use of mobile, virtualized workloads and highly adaptive energy-saving modes.

Addressing the needs of these servers requires not just advanced power-converter design but experience in optimizing at the system level. Within the server, advances in integration have made it possible to pack not just multiple processor cores but also support logic into a single system-on-chip (SoC) device, with multiple SoCs deployed on each blade. The intermediate bus converters (IBCs), commonly known as 'bricks', that supply power to the blades have had to increase their capacity accordingly.

Just two decades ago, 150 W was the realistic limit for the brick class of converter (see Figure 2). Today, even quarter-brick converters, which take up just 21 cm² of board space, can sustain up to 864 W and are pushing beyond 1 kW. Soon, 3 kW bricks will be needed.


Figure 2: Just two decades ago, 150 W was the realistic limit for the brick class of converter. Today, even quarter-brick converters, which take up just 21 cm² of board space, can sustain up to 864 W and are pushing beyond 1 kW.
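The scale of that density jump is easy to underestimate. A rough back-of-envelope comparison (the wattage figures come from the article; the full-brick footprint is the approximate industry-standard 4.6 × 2.4 in outline, an assumption on my part):

```python
# Rough power-density comparison. The 150 W and 864 W figures are from
# the article; the full-brick footprint is an assumed industry-standard
# outline of roughly 4.6 x 2.4 in.
full_brick_cm2 = 11.68 * 6.10       # ~4.6 x 2.4 in full brick, in cm
quarter_brick_cm2 = 21.0            # quoted in the article

density_then = 150 / full_brick_cm2      # W/cm^2, 150 W full brick
density_now = 864 / quarter_brick_cm2    # W/cm^2, 864 W quarter brick

print(f"then: {density_then:.1f} W/cm^2")   # ~2.1 W/cm^2
print(f"now:  {density_now:.1f} W/cm^2")    # ~41.1 W/cm^2
print(f"gain: {density_now / density_then:.0f}x")
```

Roughly a twenty-fold increase in board-level power density, which is what makes the thermal considerations discussed next so pressing.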

In this high-density power environment, thermal compatibility is a key consideration. It demands high efficiency in the power-conversion circuitry to reduce the amount of waste heat that must be vented from the system; removing that heat is one of the main costs of data-center operation. But an efficient power-conversion topology is only part of the story when the environmental focus is on lifecycle resource usage: flexible design and attention to the overall power architecture are essential.

As the primary methods of cooling in servers are conduction and convection, airflow is a vital component of power-subsystem design. Open-frame power supplies have become popular because their design provides improved airflow efficiency; they also use less metalwork and packaging. But their real-world performance depends on operating conditions.

An open-frame design is far more sensitive to its orientation with respect to airflow than an enclosed design: any of the four possible 2D orientations of a supply could be the best for a given situation (see Figure 3). Flexible design is important to ensure that the right supply is chosen for a given orientation. A careful design decision will not only lower operational cooling costs but also allow less powerful fans and fewer heatsinks, helping to keep production cost and resource usage low.


Figure 3: An open-frame design is far more sensitive to its orientation with respect to airflow than an enclosed design: any of the four possible 2D orientations of a supply could be the best for a given situation.

To support the high-current environment and reliability requirements of a multicore server, power supplies will often need to be used in N+1 parallel configurations. Regulation is a key issue in parallel architectures. Non-regulated intermediate bus converters typically have better efficiency than regulated types, but they are not suitable for all situations, such as wide-input-range battery-operated systems or paralleled supplies. Because of the high risk of continuous over-current in parallel operation, non-regulated IBCs can heat up excessively, reducing energy efficiency and drastically shortening reliability and lifetime, with a knock-on effect on lifecycle cost.
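The over-current risk can be made concrete with a simple sketch. In an N+1 array each module ideally carries load/N+1; lose one module and the survivors must absorb the difference, and without regulation or active current sharing, output-impedance mismatches can push one module well past even that. All numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch: worst-case per-module current in an N+1 parallel array.
# All figures (120 A load, 3+1 array, 20% sharing imbalance) are
# assumed for illustration, not taken from any real system.

def per_module_current(load_a, active_modules, imbalance=1.0):
    """Worst-case current in one module.
    imbalance = 1.0 means ideal sharing; 1.2 means one module carries
    20% more than its fair share (typical of unregulated paralleling)."""
    return load_a / active_modules * imbalance

load = 120.0   # total load current, A (assumed)
modules = 4    # 3+1 redundant array

print(per_module_current(load, modules))            # ideal: 30.0 A each
print(per_module_current(load, modules - 1))        # one failed: 40.0 A
print(per_module_current(load, modules - 1, 1.2))   # failed + poor sharing: 48.0 A
```

A 60% rise over the nominal per-module current is the kind of sustained stress that drives the excessive heating, and with it the reliability and lifetime penalties, described above.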

Isolation also has an impact on efficiency and thermal compatibility. Although fully isolated intermediate and point-of-load (POL) converters are available, overusing them increases materials cost and reduces efficiency. Careful attention to server design will ensure that the isolation configuration is right-sized for the product.

Further optimizations are possible through the topology of the power converters. The SoCs used in today’s servers employ advanced nanometer processes that not only run at low voltages – some of them operate significantly below 1V – but tune their supply voltage to a fine level of granularity to maximize efficiency. To ensure that their circuitry operates correctly, the voltage rails need to be maintained with very close tolerances, often to less than ±30mV.
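It is worth noting how demanding a ±30 mV window is in relative terms once rail voltages drop below 1 V. A quick comparison against some representative rail voltages (the 0.85 V figure is an assumed example of a sub-1 V core rail):

```python
# A fixed +/-30 mV window is a far tighter *relative* tolerance on a
# sub-1 V core rail than on a legacy 3.3 V or 5 V rail. The 0.85 V
# entry is an assumed example of a modern sub-1 V SoC rail.
tolerance_v = 0.030
for rail in (0.85, 1.0, 3.3, 5.0):
    print(f"{rail:>4} V rail: +/-{tolerance_v / rail * 100:.1f}%")
```

On a 0.85 V rail the same ±30 mV corresponds to roughly ±3.5%, while on a 5 V rail it would be a comfortable ±0.6%; the regulation problem gets harder even though the absolute window is unchanged.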

Traditionally, power-converter designs have relied on analogue control techniques that must be tuned through careful selection of external passive components. Together, these components form a control loop that remains stable under changing load demands. But it is an inflexible approach: even small changes in output voltage call for the passives and their placement to be reassessed to find the best combination of control and efficiency. In practice this reassessment is rarely performed, because it would demand changes to the PCB and its components.

Digital control provides a more flexible and efficient alternative. It delivers high-efficiency power conversion within the tight voltage tolerances that advanced processors need to handle quickly changing requirements, and it makes it possible to improve and optimize dynamic response to changing loads. Despite these advantages, systems designers have tended to avoid digital control loops because their setup can be complex and time-consuming if performed manually.
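The flexibility argument can be seen in a minimal sketch of a digital voltage loop: a discrete PI compensator driving a crude first-order model of a converter's output stage. All gains and plant constants here are illustrative, not taken from any real product or from Ericsson's designs:

```python
# Minimal sketch of a digitally controlled voltage loop: a discrete PI
# compensator regulating a crude first-order plant model. Gains, plant
# dynamics and voltages are all assumed for illustration.

def run_loop(v_ref=1.0, v_in=1.2, steps=200, kp=0.5, ki=0.05):
    """Return the output voltage after `steps` control cycles."""
    v_out, integral = 0.0, 0.0
    for _ in range(steps):
        error = v_ref - v_out
        integral += error
        duty = min(1.0, max(0.0, kp * error + ki * integral))  # clamp 0..1
        # First-order plant: output moves toward duty * v_in each cycle.
        v_out += 0.2 * (duty * v_in - v_out)
    return v_out

print(f"settled output: {run_loop():.3f} V")  # converges to the 1.0 V reference
```

The point of the sketch is that retargeting the loop, a new reference voltage or different compensator gains, is a firmware parameter change rather than a PCB respin, which is exactly the inflexibility of the analogue approach that digital control removes.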

Software tools provide the means to design, simulate, analyze and configure the voltage regulator. Using a tool such as Ericsson's DC/DC Power Designer, it is possible to build effective control-loop settings within minutes and take advantage of the increased efficiency of digital control (see Figure 4). The software includes simple tools for the robust design of control loops, together with more advanced design and analysis tools to optimize the dynamic response performance.


Figure 4: Software tools provide the means to design, simulate, analyze and configure the voltage regulator. Using a tool such as Ericsson's DC/DC Power Designer, it is possible to build effective control-loop settings within minutes and take advantage of the increased efficiency of digital control.

Digital control can also help reduce materials usage: higher switching frequencies allow smaller-value passive components, and the decoupling capacitance network that advanced low-voltage processors require can be optimized. The software can, in turn, determine the ideal type and number of capacitors needed to meet the stringent load-transient requirements of high-density servers.
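The capacitor-count question reduces, to first order, to charge balance: the bank must supply the load step for the loop's response time while the rail droops no more than the allowed window, so C = I·Δt/ΔV. A back-of-envelope estimate with assumed numbers (a real tool would also account for ESR, ESL and capacitor derating):

```python
import math

# First-order bulk-decoupling estimate: C = I * dt / dV.
# All numbers (30 A step, 2 us loop response, 100 uF parts) are
# assumed for illustration; real sizing must also include ESR/ESL.
i_step = 30.0        # load-current step, A
t_response = 2e-6    # control-loop response time, s
dv = 0.030           # allowed droop, V

c_required = i_step * t_response / dv   # required capacitance, F
per_cap = 100e-6                        # 100 uF per capacitor
n_caps = math.ceil(c_required / per_cap)

print(f"{c_required * 1e6:.0f} uF -> {n_caps} x 100 uF capacitors")
```

Halving the loop response time halves the required capacitance, which is how the faster dynamic response of digital control translates directly into fewer components and lower embodied resource usage.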

By taking system-level considerations into account, designers of advanced IT equipment for the data-center can meet the needs of upcoming legislation based on the lifecycle assessment of environmental impact. Power-converter suppliers with the experience of the market and commitment to supporting it can ensure that systems designers are well prepared.

Ericsson Power Modules