Author:
Patrick Le Fèvre, Ericsson Power Modules
Date:
04/29/2015
Software is playing an increasingly prominent role in networking at a system architecture level, but it is also set to play a growing part in the delivery of power. Digital control, the use of power-optimization software algorithms and the concept of the Software Defined Power Architecture are all being seen as part of a brave new future for advanced board-power management.
A transformation is under way in networking, moving toward new and advanced software-based architectures that are more agile and have the flexibility to meet the rapidly changing demands of new services and applications and fast-evolving traffic patterns. Technologies such as SDN (Software Defined Networking) and NFV (Network Functions Virtualization) are going to play a critical role in emerging network architectures and the next generation of equipment in the world of ICT, datacom and telecom applications, and more specifically in data centers and ‘the cloud’.
In essence, SDN embodies the concept of separating the control plane from the data plane – or the software from the hardware: it acknowledges that the software does not necessarily need to run on specific networking hardware, but can instead run elsewhere, perhaps on servers in a data center and, more generically, in the cloud. It represents a gradual transition from hardware implemented at a very local level to software running a much more global network.
Added to this, NFV is a concept that offers significant economies of scale and standardization, whereby multiple network functions or applications are consolidated on what are called virtual machines based on commercially available equipment and hardware. While techniques such as virtualization can make a contribution to reducing energy consumption, clearly what is also critical is the implementation of software control at the board level (see Figure 1).
Figure 1 – Software Defined Networking is expected to be implemented in new and advanced networking architectures controlling an increasing multitude of equipment, services and functions
Efficiency limits
New massive data centers are being set up across the world and existing ones expanded to help facilitate the global expansion of the cloud-computing model, comprising centralized data storage and access to computing resources in either public or private domains or in a hybrid of both. A major driving force for this expansion is mobile traffic and the growing pervasiveness of connected devices: by 2020, we can expect commercial deployment of 5G services and a significant increase in video traffic, which is particularly demanding in terms of networking throughput. In addition, there is the paradigm of the Internet of Things (IoT), with many billions of devices expected to be connected to the Internet, perhaps largely via telecom networks, in the coming decade or so.
A major challenge for data centers today is to deliver power to where it is needed and in a way that keeps the energy used by servers proportional to the required workload. It is servers that consume the majority of the energy in data centers – and approximately 60 percent of this energy is used even when they are idle. A better understanding of the operating characteristics of servers is therefore a necessary step toward improving overall efficiency. Certainly, energy management software is increasingly being used to provide advanced monitoring of the power usage of server racks, anticipating surges in demand, for example, or tracking operating temperature patterns and the cooling they require, enabling higher efficiency and savings in energy.
At the board level, however, it is another question. Improvements in power module conversion technologies certainly deliver incremental gains in efficiency over a wide range of loads. For example, Ericsson’s high-efficiency and highly regulated DC/DC converters offer conversion efficiency levels of approximately 96.3% across the 36V to 75V telecom input voltage range, and efficiencies approaching 97% in the narrower 40V to 60V input range used in datacom. Efficiency could reach 98.5% in the very near future, although this will require high levels of technological innovation as well as the implementation of advanced new power supply topologies such as Ericsson’s Hybrid Regulated Ratio, as just one example. However, these improvements in module efficiency will not necessarily make a significant difference to overall system power consumption during low-load conditions in a networking environment.
Evolving digitally
Energy management at the board level has evolved over the past few years, starting from virtually nowhere a decade ago to the latest advanced power systems that use digital monitoring and control software to improve conversion efficiency. Fundamentally, digital power allows the converter’s inner control loop to be adjusted via its PMBus-based measurement-and-control subsystem. In 2008, Ericsson was the first manufacturer of board-mounted power supplies to offer a family of Intermediate Bus Converters (IBCs) and Point-of-Load (POL) DC/DC modules that exploit digital power-control techniques. These digital DC/DC converters and their successors can adapt in real time to changes in line and load conditions, driven by network traffic demand for example.
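As a rough illustration of what such a PMBus-based measurement-and-control subsystem exposes, the sketch below polls basic telemetry from a PMBus-compliant digital POL converter over I2C, using Python and the smbus2 library. The bus number and device address are hypothetical assumptions; in practice a vendor configuration tool or a dedicated board controller would normally handle this.

```python
# Minimal sketch: polling telemetry from a PMBus-compliant digital POL
# converter over I2C. Address and bus number are illustrative only.
from smbus2 import SMBus

POL_ADDR   = 0x20   # hypothetical 7-bit PMBus address of the POL converter
VOUT_MODE  = 0x20   # standard PMBus command codes
READ_VOUT  = 0x8B
READ_IOUT  = 0x8C
READ_TEMP1 = 0x8D

def twos(value, bits):
    """Decode a two's-complement field of the given width."""
    return value - (1 << bits) if value >= (1 << (bits - 1)) else value

def linear11(word):
    """Decode PMBus LINEAR11 data (used for currents, temperatures, etc.)."""
    return twos(word & 0x7FF, 11) * 2.0 ** twos(word >> 11, 5)

def linear16(word, vout_mode):
    """Decode PMBus LINEAR16 voltage data using the VOUT_MODE exponent."""
    return word * 2.0 ** twos(vout_mode & 0x1F, 5)

with SMBus(1) as bus:                      # I2C bus number is board-specific
    mode = bus.read_byte_data(POL_ADDR, VOUT_MODE)
    vout = linear16(bus.read_word_data(POL_ADDR, READ_VOUT), mode)
    iout = linear11(bus.read_word_data(POL_ADDR, READ_IOUT))
    temp = linear11(bus.read_word_data(POL_ADDR, READ_TEMP1))
    print(f"Vout={vout:.3f} V  Iout={iout:.1f} A  T={temp:.0f} degC")
```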
Digital power can also help with the complexity of modern power distribution systems and the number of different voltage rails required for today’s leading-edge processors, including high-performance multicore IP microprocessors, ASICs, FPGAs and other digital processing ICs. These devices require highly flexible power solutions that can adjust the output voltage over a range of 0.6V to 1.8V to optimize the configuration, with excellent response to load-step transients. Many companies today are using digital power functionality, often for testing, setup and board configuration, and a leading few are gaining the full benefits of the in-system capabilities offered by advanced digitally controlled power converters.
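Building on the same PMBus access, the following sketch shows how such a rail could be trimmed within a 0.6V to 1.8V window using the standard VOUT_COMMAND. The device address and clamp limits are illustrative assumptions; real limits come from the load’s datasheet and the converter’s own protection settings.

```python
# Minimal sketch: trimming a digital POL's output within a 0.6-1.8 V window
# via the standard PMBus VOUT_COMMAND. Address and limits are illustrative.
from smbus2 import SMBus

POL_ADDR     = 0x20    # hypothetical PMBus address of the core-rail POL
VOUT_MODE    = 0x20    # standard PMBus command codes
VOUT_COMMAND = 0x21

def set_vout(bus, addr, volts, vmin=0.6, vmax=1.8):
    """Clamp the request to the rail's window and write it as LINEAR16."""
    volts = max(vmin, min(vmax, volts))
    exp = bus.read_byte_data(addr, VOUT_MODE) & 0x1F
    exp = exp - 32 if exp > 15 else exp        # 5-bit two's complement
    raw = int(round(volts / 2.0 ** exp))       # LINEAR16 mantissa
    bus.write_word_data(addr, VOUT_COMMAND, raw & 0xFFFF)

with SMBus(1) as bus:
    set_vout(bus, POL_ADDR, 0.9)   # e.g. lower a core rail from 1.0 V to 0.9 V
```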
Software-defined power architecture
An evolution of digital power now emerging is the Software-Defined Power Architecture (SDPA), which has the potential to bring energy-efficient and power-optimized board-level capabilities to networking applications. Conceptually, processors will use software command control to adjust required power levels, delivering more power when computing operations are at full capacity, or adapting performance and behavioral characteristics to reduce overall energy consumption when a processor is handling only limited computational tasks at times of low data-traffic demand. While much is still up for discussion in the power industry, the SDPA should include key energy-saving concepts that have been developed, and are increasingly being implemented, over the past few years. These include the dynamic bus voltage, adaptive voltage scaling and fragmented power distribution, among many others.
The dynamic bus voltage (DBV) is an evolution of the intermediate bus architecture (IBA), which is commonly used in datacom today. Data from Ericsson supports the assertion that the DBV can reduce board power consumption by between 3 and 10%, depending on the application. The IBA employs IBCs to convert, say, the traditional 48V(DC) telecom power line to a static 12 to 14V(DC) supply that feeds a number of POL regulators, which in turn supply the final load voltages at the levels required by processors or other logic devices.
While the choice of 12/14V(DC) ensures a high enough voltage to deliver all the power required by the load in times of high data traffic, it can become highly inefficient at times of low data traffic. The DBV-based architecture therefore provides the possibility to dynamically adjust the power envelope to meet load conditions, adjusting the previously fixed intermediate bus voltage through advanced digital power control, optimized hardware and a series of software algorithms to deliver higher conversion efficiencies.
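To make the idea concrete, here is a minimal, purely illustrative policy that maps the aggregate board load to an intermediate bus set-point within an 8V to 14V window; the current thresholds are invented for the example, and the chosen value would then be written to the IBC over PMBus as in the earlier output-voltage sketch.

```python
# Illustrative dynamic-bus-voltage policy: pick a bus set-point from the
# aggregate POL load. Thresholds are assumptions, not a vendor algorithm.
def choose_bus_voltage(total_load_a, vmin=8.0, vmax=14.0):
    """Map the aggregate POL load (in amps) to an intermediate bus set-point."""
    if total_load_a < 20:       # light traffic: lower bus voltage, lower losses
        return vmin
    if total_load_a < 60:       # moderate traffic
        return 11.0
    return vmax                 # peak traffic: full headroom for the POL stage

# The chosen value would then be written to the IBC's VOUT_COMMAND over PMBus.
print(choose_bus_voltage(35.0))   # -> 11.0
```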
Adaptive voltage scaling
Adaptive voltage scaling (AVS) has been introduced in the past year or so and is a powerful technique for optimizing supply voltages and minimizing energy consumption in modern high-performance microprocessors. AVS employs a real-time closed-loop approach to adapt the supply to the minimum voltage required for the actual clock frequency and workload of the individual processor; it also adjusts automatically to compensate for process and temperature variations in the processor. Leading-edge high-performance microprocessors can change workload and operating conditions within nanoseconds, so real-time regulation of the microprocessor supply puts a high demand on the control-loop bandwidth and requires close monitoring of computing hardware performance in the feedback loop.
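The sketch below illustrates only the AVS control principle: nudging a core rail toward the minimum voltage at which the processor still reports adequate margin. A real AVS loop runs in dedicated hardware at far higher bandwidth than any software loop, and the margin input here is a hypothetical placeholder for an on-die performance monitor.

```python
# Illustrative-only AVS step: move the core-rail set-point toward the
# lowest voltage that still leaves the requested timing margin.
VSTEP = 0.005            # 5 mV adjustment step (assumption)
VMIN, VMAX = 0.60, 1.10  # illustrative safe operating window for the core rail

def avs_step(vcore, margin, target_margin):
    """Return the next core-voltage set-point given the measured margin."""
    if margin < target_margin:           # too little slack: raise the rail
        return min(VMAX, vcore + VSTEP)
    if margin > 2 * target_margin:       # comfortable slack: save energy
        return max(VMIN, vcore - VSTEP)
    return vcore                         # within band: hold the set-point

# Example: plenty of margin, so the set-point steps down by 5 mV.
print(avs_step(0.90, margin=0.25, target_margin=0.10))   # -> about 0.895
```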
In addition, Ericsson has recently presented a possible solution called ‘fragmented power distribution’, which uses digital power monitoring and control capabilities to deliver higher levels of efficiency on multi-kilowatt boards. Multiple converters are distributed at strategic locations across the board to create power islands, all communicating via an internal bus such as the PMBus and operating together to share and optimize the delivery of power to loads. Each converter can be a master or slave unit, operating independently or together to deliver full power where it is required, allowing the system to perform at the optimum level.
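One way to picture this is a simple module-shedding routine: enable only as many paralleled converters, or power islands, as the present load requires, using the standard PMBus OPERATION command. The addresses and the 90A per-module rating below are illustrative assumptions, not a description of a specific Ericsson design.

```python
# Illustrative module shedding for fragmented power distribution: enable
# just enough power islands for the load via the PMBus OPERATION command.
import math
from smbus2 import SMBus

OPERATION = 0x01          # standard PMBus command
OP_ON     = 0x80          # output on
OP_OFF    = 0x00          # immediate off
MODULE_RATING_A = 90      # assumed rating of one digital POL per power island
MODULE_ADDRS = [0x20, 0x21, 0x22, 0x23]   # hypothetical PMBus addresses

def set_active_modules(bus, load_a):
    """Enable just enough modules for the present load; park the rest."""
    needed = max(1, math.ceil(load_a / MODULE_RATING_A))
    for i, addr in enumerate(MODULE_ADDRS):
        bus.write_byte_data(addr, OPERATION, OP_ON if i < needed else OP_OFF)

with SMBus(1) as bus:
    set_active_modules(bus, load_a=140)   # 140 A -> two of the four modules on
```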
As well as these already introduced technologies, many others are also being proposed within the industry, including: ‘adaptive power allocation traffic scaling’, where certain voltage levels will need to be allocated across highly complex board-level systems to meet data traffic demands; ‘multicore activation on demand’, which can be implemented within a multicore processor or via external software that commands the processor to activate or deactivate a core; and ‘power profile optimization’, whereby a board is preconfigured for different application scenarios with automatic software selection deciding the one best suited for the situation.
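As a small illustration of the last of these, ‘power profile optimization’ could be as simple as a table of preconfigured rail settings with software selecting the entry best suited to the traffic level; the profile contents below are entirely invented for the example.

```python
# Illustrative power-profile selection: preconfigured settings per scenario,
# chosen automatically from the measured traffic level. Values are invented.
PROFILES = {
    "peak_traffic": {"bus_v": 14.0, "core_v": 1.00, "active_modules": 4},
    "typical":      {"bus_v": 11.0, "core_v": 0.90, "active_modules": 2},
    "night_idle":   {"bus_v": 8.0,  "core_v": 0.80, "active_modules": 1},
}

def select_profile(traffic_load_pct):
    """Pick the preconfigured profile best suited to the current traffic level."""
    if traffic_load_pct > 70:
        return PROFILES["peak_traffic"]
    if traffic_load_pct > 20:
        return PROFILES["typical"]
    return PROFILES["night_idle"]

print(select_profile(15))   # -> the 'night_idle' settings
```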
Transition Milestone
One recently announced device from Ericsson, the BMR465, has been designed to be a milestone product in the transition to power systems that will implement the SDPA. Mainly targeting enterprise servers, networking equipment and high-end routers deployed in data centers, but also suitable for a wide range of other applications that have high current demands, the BMR465 enables the powering of applications that require high-efficiency and flexible POL conversion over the 90A to 360A range (see Figure 2).
Figure 2 – Ericsson BMR465
A member of Ericsson’s 3E family of digitally controlled DC/DC converters, the module is a 90A digital POL converter that offers the ability to connect modules in parallel to provide up to 360A to advanced network processors. While it can be operated as a standalone unit, it can also work as part of a larger power system when processor boards require higher current. Built on a two-phase topology, the module allows up to four units to be connected in parallel to deliver up to 360A as part of a multi-module, multiphase (up to eight-phase) power system that enables phase spreading, reducing peak current and the number of capacitors required by end systems. The converter is also fully compliant with PMBus commands and has been integrated into the Ericsson DC/DC Digital Power Designer software, which makes it easy for systems architects to simulate and configure complete multi-module and multiphase systems prior to implementation, thereby gaining valuable time-to-market.
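As a back-of-the-envelope illustration of such paralleling, the sketch below sizes a BMR465 array for a given load from the figures quoted above (90A per module, two phases per module, up to four modules); the derating factor is an assumption added purely for the example.

```python
# Rough sizing of a paralleled BMR465 array from the article's figures.
# The 0.9 derating factor is an assumption for illustration only.
import math

MODULE_CURRENT_A  = 90    # rated output current per module
PHASES_PER_MODULE = 2     # the module is built on a two-phase topology
MAX_MODULES       = 4     # up to four modules (eight phases) in parallel

def size_array(load_a, derating=0.9):
    """Return (modules, phases, per-phase current) for a given load in amps."""
    modules = math.ceil(load_a / (MODULE_CURRENT_A * derating))
    if modules > MAX_MODULES:
        raise ValueError("load exceeds a four-module BMR465 array")
    phases = modules * PHASES_PER_MODULE
    return modules, phases, load_a / phases

print(size_array(300))   # -> (4, 8, 37.5): four modules, eight interleaved phases
```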
The new module also integrates ‘compensation-free’ modulation techniques, automatically providing stability, accurate line and load regulation and good transient performance across a wide range of operating conditions. Accepting a 7.5V to 14V input, it can operate over a large range of intermediate bus voltages from 8V to 14V, thereby complying with the Dynamic Bus Voltage scheme expected to be part of the SDPA, reducing power dissipation and saving energy. Likewise, Adaptive Voltage Scaling can be performed via the PMBus by adjusting the BMR465 output voltage to the optimized core voltage required by the processor.
A softer world
Certainly, there are many challenges to making the SDPA a reality, and significant collaboration will be needed between power-system architects and board designers in the networking arena, as well as semiconductor vendors and power module manufacturers. But it is an important concept, and board power management, like the coming future of networking, will be increasingly software managed and controlled. In the future, the SDPA may well evolve to help not only at the board level but also at a system level, handling the energy delivered to many different functions in more efficient ways.