The Future of Powering AI: Redefining Power Flow from Grid-to-Core in Data Centers

Author:
Carl Smith, Global Applications Manager Data Center, Infineon Technologies AG

Date:
08/30/2025

The rise of AI and the data center challenge


Figure 1: Ever-increasing server rack power consumption

A recent tweet[1] by Sam Altman, the CEO of OpenAI, suggested that being polite in your chats with ChatGPT costs the company tens of millions of dollars, owing to the additional computational load needed to process words such as “please” and “thank you”. While this set off a range of opinions on the internet, it also drew attention to the continuous and rapidly rising power demand of data centers globally, especially those providing AI computing services.

As artificial intelligence (AI) accelerates its influence across industries, the infrastructure supporting this transformation faces unprecedented demands. Central to this evolution is the data center, where processors and server racks are reaching new heights in power consumption.

Following the rise of AI, multiple experts have made predictions about the ravenous appetite for electric power of the data centers hosting these AI servers. For example, according to a McKinsey article[2], by 2030 data centers may consume ~7% of global final energy demand (~130 GW), up from about 2% in 2022.

The evolution of server rack architecture

With great power comes great responsibility, but with great power demand comes the need for a great distribution architecture. Before the explosion of AI, a single rack consumed around 60 kW; today, rack powers of 100-200 kW are the norm, roughly double the previous generation. With AI models evolving every day, such a doubling might occur as quickly as every 18 months, similar to Moore’s law, with next-generation racks rated for 600 kW and reaching up to 1 MW per rack by the end of the decade.
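
As a quick back-of-the-envelope check of that cadence, the short Python sketch below projects rack power from an assumed 150 kW starting point with an 18-month doubling period; the starting value and the cadence are illustrative assumptions rather than roadmap figures.

# Illustrative projection of rack power under an assumed 18-month doubling
# period; the 150 kW starting point is an assumption, not a roadmap figure.

def projected_rack_power_kw(p0_kw, years, doubling_months=18.0):
    """Rack power after `years`, doubling every `doubling_months`."""
    doublings = (years * 12.0) / doubling_months
    return p0_kw * (2.0 ** doublings)

start_kw = 150.0  # assumed mid-range of today's 100-200 kW racks
for years in (0, 1.5, 3.0, 4.5):
    print(f"+{years:>4} yr: ~{projected_rack_power_kw(start_kw, years):,.0f} kW per rack")
# Prints 150 -> 300 -> 600 -> 1,200 kW, in line with 600 kW next-generation
# racks and roughly 1 MW per rack towards the end of the decade.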

Modern AI data centers are grappling with escalating power consumption. The traditional architecture, which integrates power delivery, backup power, and IT payload within a single rack, is struggling to meet the demands of high-performance computing and reaches its physical limits once rack power consumption exceeds ~250 kW. These power levels mandate an architectural change towards high-voltage DC, which will replace the well-established 48 V ecosystem with the setup shown in Figure 2.


Figure 2: Three key stages of the transition towards HVDC architectures

 

In today’s data center, the IT payload, power supply, and backup power are housed within the server rack, a configuration that is typically suitable for rack power consumption up to 250 kW. Power delivery currently relies on single-phase AC, with progressively increasing wattage.

From 2027 onwards, as rack power surpasses 250 kW and climbs towards 500 kW and beyond, we anticipate a shift towards dedicated sidecar racks that house power delivery and backup power, separate from the IT payload rack but in close proximity. In this scenario, power delivery will transition from single-phase to three-phase AC. This intermediate architecture already offers significant advantages in terms of scalability and represents a step towards a more radical change in data center architecture.

From 2029 onwards, we expect data centers to evolve into a fully centralized high-voltage DC power distribution architecture that uses solid-state transformers and solid-state circuit breakers, enabling efficient power distribution across the entire facility.

Infineon, a leader in power solutions, is at the forefront of this transition, collaborating with industry ecosystem players to develop innovative architectures that support these advancements.

Our grid-to-core approach, serving the needs of today and tomorrow

Infineon has been recognized for over 20 years as a leader in advanced semiconductors for data centers. It is a one-stop shop for reliable power solutions spanning from the power grid to the processor’s core, addressing the challenging requirements of today’s AI platforms while constantly innovating for the demands of the next-generation AI platforms of tomorrow.

Infineon is a full-system provider with solutions for every functional block, leveraging a mix of silicon (Si), silicon carbide (SiC), and gallium nitride (GaN) to maximize power density and efficiency at every conversion step – both outside and inside the data center, be it solid-state transformers, solid-state circuit breakers, energy storage systems (ESS), power supply units (PSU), battery backup units (BBU), intermediate bus converters (IBC), or voltage regulator modules.

Apart from providing robust solutions for the present-day scenario, we are working with industry leaders on the next step in power distribution: a facility-level architecture. Announced at Computex 2025[3], this GW-scale data center architecture features a centralized power generation setup that uses the fewest practical number of power conversion stages.


Figure 3: Proposed facility-level GW-scale power architecture, as presented at Computex 2025

 

To support this architectural change, Infineon already offers solutions for high-voltage DC architectures, for example high-voltage intermediate bus converter (IBC) designs and higher-power single- and three-phase PSUs. We are also constantly innovating to increase the power density of our voltage regulator modules to bring them closer to the processor, ultimately enabling true vertical power delivery.

Infineon's leadership in the industry is rooted in its mastery of the three dominant semiconductor technologies: Si, SiC, and GaN. Infineon’s silicon products, such as the CoolMOS™ family, have long set the standard for efficiency and thermal performance in high-voltage, high-power-density designs. Now, with CoolSiC™ suited to higher-voltage designs and CoolGaN™ available for solutions requiring higher switching frequencies and even higher power densities, Infineon can offer a unique hybrid approach that extracts the best of each technology’s strengths. This allows designers to meet the varied needs of AI data centers, while offering best-in-class power density, efficiency, and total cost of ownership (TCO).

Improving efficiency at every step of the power conversion chain

As mentioned previously, physical limits cap the efficiency that can be extracted at the component level. To improve system-level efficiency, it is therefore necessary to reduce power delivery network (PDN) losses: roughly 100 W is estimated to be wasted in the PDN for every 1000 W consumed by the GPU. Data centers consequently need to innovate along the entire power delivery path while also optimizing thermal management. The power unavoidably lost in distributing energy from the AC grid to the computing core manifests as heat, which in turn takes energy to remove from the system. In other words, along with reducing PDN losses, cutting the cooling needs at every stage of power conversion also needs to be prioritized. Even a modest improvement of 0.5-1% in the first stage, combined with a 4-5% gain in the second stage, results in a substantial increase in system-level efficiency and, therefore, substantial savings in TCO.
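
To make the compounding effect concrete, the sketch below multiplies per-stage efficiencies along a simplified two-stage grid-to-core chain. The stage split and the efficiency values are illustrative assumptions, chosen only to be consistent with the ~100 W of PDN loss per 1000 W of GPU load mentioned above; they are not measured or Infineon-specified figures.

# End-to-end PDN efficiency is the product of the individual conversion
# stages, so small per-stage gains compound. The numbers below are assumed,
# illustrative values only.

def end_to_end(*stage_efficiencies):
    """Multiply per-stage efficiencies to get the end-to-end efficiency."""
    eta = 1.0
    for e in stage_efficiencies:
        eta *= e
    return eta

P_CORE_W = 1000.0  # power delivered to the compute core

# Baseline: assumed ~97.5% AC-DC front end, ~93.2% DC-DC path to the core
eta_base = end_to_end(0.975, 0.932)            # ~0.909
loss_base = P_CORE_W / eta_base - P_CORE_W     # ~100 W of PDN loss per kW

# Improved: +0.5 points on the first stage, +4 points on the second stage
eta_new = end_to_end(0.980, 0.972)             # ~0.953
loss_new = P_CORE_W / eta_new - P_CORE_W       # ~50 W of PDN loss per kW

print(f"baseline: {eta_base:.1%} end-to-end, {loss_base:.0f} W lost per kW of core load")
print(f"improved: {eta_new:.1%} end-to-end, {loss_new:.0f} W lost per kW of core load")
# Roughly halving the PDN loss also cuts the heat the cooling system must
# remove for that part of the chain, compounding the TCO benefit.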


Figure 4: Potential areas of improvement in today’s data center energy efficiency

 

To achieve this, we have identified a few areas where such improvements can readily be made for a holistic increase in efficiency. The most important lever is the correct use of semiconductor devices, ideally a mix of Si, SiC, and GaN, in both analog and mixed-signal circuits to ensure high power density. Wide-bandgap (WBG) devices from Infineon come in advanced packaging for the best thermal management, with features such as chip embedding, top-side-cooled (TSC) modules, and devices with integrated magnetics that reduce the BOM.

Next, the system architecture itself needs to be updated with novel converter topologies, to which GaN technology is a particularly strong contributor. These new topologies, combined with vertical power delivery, can yield a modular and scalable architecture, which is critical in these times of rapid and continuous expansion. Lastly, hardware alone will not be sufficient: software that enables smart control of new-age digital controllers for hot-swapping, eFuses, and various point-of-load devices is equally important in safeguarding uninterrupted data center operation.


Figure 5: Levers to increase power efficiency in AI data centers

 

Working closely with customers has taught us that solutions are not just about products, but also about reliability, both in terms of the performance of the products themselves and their availability. Infineon is a pioneer not only in innovation but also in manufacturing flexibility and resilience: from producing the world’s first 300 mm Si power wafer in 2010, Infineon has gone from strength to strength, creating the world's first 300 mm GaN power wafer in 2024.

To keep up with the blazing pace of improvement in AI technology, Infineon works in close collaboration with ecosystem partners for the highest level of vertical integration, increasing its chip embedding capacity by 80% within four months between January and April of 2025. This has helped us establish a proven track record, with a 35% reduction in sample lead time and the ability to ship about 100 million power stages in five months (up to April 2025).

In summary, the industry must continue to innovate in power semiconductor technologies and system designs to enable the sustainable growth of AI capabilities in a way that is technically, environmentally, and economically viable. With Infineon’s innovative solutions at the forefront, the journey toward a more energy-efficient and sustainable AI-powered world is underway.


References:

[1] https://x.com/sama/status/1912646035979239430

[2] https://www.mckinsey.com/industries/semiconductors/our-insights/generative-ai-the-next-s-curve-for-the-semiconductor-industry

[3] https://www.computextaipei.com.tw/en/index.html
