5 Fast Facts on Compute Efficiency with Flex’s Chris Butler
The rise of AI has brought unprecedented pressure on the power and cooling systems that sustain today’s data centers. Racks that once drew 30kW are now pushing 100kW or more, with 1MW configurations on the horizon. Meeting these demands isn’t just about scaling capacity — it’s about rethinking the entire power delivery chain for maximum efficiency and sustainability.
Flex, a global leader in manufacturing and critical power solutions, is tackling this challenge head-on. From high-voltage DC architectures and 97.5% efficient power shelves to integrated liquid cooling and vertically integrated “grid to chip” solutions, the company is reshaping how data centers operate in the AI era.
In this “5 Fast Facts on Compute Efficiency” conversation, I sat down with Chris Butler, President of Embedded and Critical Power at Flex, to explore how innovations in power, cooling, and manufacturing scale are unlocking new levels of efficiency, and how these breakthroughs could redefine sustainable AI infrastructure in the years ahead.
Q1: Chris, we’re seeing data center power requirements evolve dramatically with AI workloads pushing racks from 30kW to potentially 100kW+ and even toward 1MW configurations. You recently announced a power shelf system achieving 97.5% efficiency at half-load for NVIDIA GB300 NVL72 systems. How is Flex rethinking fundamental power architectures to not just handle these demands, but do so with maximum efficiency? What specific innovations in DC voltage levels and power conversion are proving most impactful?
A1: As AI workloads push data center rack densities higher, data center operators are fundamentally rethinking power architectures to meet energy consumption demands with maximum efficiency, scalability, and sustainability. A broader industry shift toward high-voltage DC architectures, particularly +/- 400 V DC and 800 V DC, has the potential to reduce conduction losses, enable longer cable runs, and minimize the conversion stages required to step power down from the grid, improving system efficiency and reducing thermal management overhead.
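The conduction-loss argument follows from simple physics: for a fixed power draw, current falls in proportion to voltage, and resistive loss falls with the square of the current. A minimal sketch (the rack power and busbar resistance below are illustrative assumptions, not Flex or NVIDIA specifications):

```python
# Illustrative only: compare I^2 * R conduction losses when delivering the
# same rack power over the same distribution resistance at different DC voltages.
# All numeric values are hypothetical assumptions for the sketch.

def conduction_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss P_loss = I^2 * R, with current I = P / V."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000   # a hypothetical 100 kW AI rack
BUSBAR_R_OHM = 0.001     # assumed 1 milliohm distribution path

loss_48v = conduction_loss_watts(RACK_POWER_W, 48, BUSBAR_R_OHM)
loss_800v = conduction_loss_watts(RACK_POWER_W, 800, BUSBAR_R_OHM)

print(f"48 V loss:  {loss_48v:,.0f} W")
print(f"800 V loss: {loss_800v:,.0f} W")
print(f"Reduction factor: {loss_48v / loss_800v:.0f}x")
```

Because loss scales with the square of the voltage ratio, moving from 48 V to 800 V cuts conduction loss in this model by a factor of (800/48)², which is also why longer cable runs become practical at higher voltages.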
Flex collaborates with hyperscalers well in advance of new standards and product introductions to ensure their power architectures are innovation-ready — an example being our recently announced power shelf system that is optimized for NVIDIA GB300 NVL72 platforms. Achieving 97.5% efficiency at half-load, it leverages native 800 V DC input to streamline power conversion and reduce the need for intermediate AC stages. That improves energy efficiency while simplifying infrastructure design, allowing for denser deployments and faster scalability within the same data center footprint.
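A few points of conversion efficiency translate directly into waste heat the cooling system must then remove. A quick sketch of that relationship (the rack load and the lower comparison efficiency are assumptions of mine, not published GB300 NVL72 figures; only the 97.5% value comes from the text above):

```python
# Illustrative: what a given conversion efficiency means in waste heat.
# The 60 kW load and the 94% comparison point are assumed values.

def conversion_loss_kw(it_load_kw: float, efficiency: float) -> float:
    """Input power = load / efficiency; loss = input power - delivered load."""
    return it_load_kw / efficiency - it_load_kw

LOAD_KW = 60  # hypothetical half-load draw for a high-density rack

for eff in (0.94, 0.975):
    print(f"Efficiency {eff:.1%}: {conversion_loss_kw(LOAD_KW, eff):.2f} kW lost as heat")
```

Under these assumptions, the higher-efficiency shelf dissipates less than half the waste heat of the lower-efficiency case, which is what "reducing thermal management overhead" means in practice.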
Q2: Flex has rapidly expanded manufacturing capacity by over 8 million square feet since fiscal 2024, including new facilities in Dallas and Columbia, South Carolina, focused on critical power product manufacturing and assembly, and Poland, which doubled your critical power product manufacturing capacity in Europe. You've stated that “rapid AI adoption across sectors is increasing data center operators’ need for reliable, efficient, and scalable power infrastructure solutions.” How does Flex’s manufacturing scale-up enable efficiency gains beyond just meeting demand? What efficiencies are you achieving in time to market and deployment that directly translate to operational efficiency for your customers?
A2: Proximity matters. With advanced manufacturing facilities in 30 countries, we enable customer regionalization strategies while providing the local expertise and global scale needed to drive competitive advantage. Customers benefit from capabilities and expertise that allow them to shorten the distance between manufacturing and deployment, speeding time to compute — a critical ROI metric for data center operators and investors — reducing their carbon footprint, and enhancing scalability within and between facilities. It also accelerates the delivery of services, from design and engineering support through deployment, installation, refurbishment, and recycling.
Depending on the engagement, operational efficiency may take the form of faster prototyping, reduced downtime, agile deployment, or post-sale value capture, among myriad other benefits. A mosaic of manufacturing facilities across geographies also enhances supply chain resilience, enabling customers to better navigate geopolitical uncertainties, shifting demand, labor shortages, and unforeseeable disruptions. Flex’s manufacturing capacity enables us to meet the insatiable demand for embedded and critical power solutions — not to mention cooling solutions and essential infrastructure such as racks and enclosures — while delivering tangible operational efficiency gains for data center customers worldwide.
Q3: Following Flex’s acquisition of JetCool Technologies, you now offer liquid-cooled racks supporting up to 120kW per rack with a clear upgrade path to 300kW, utilizing JetCool's microconvective cooling® technology. How does integrated liquid cooling improve thermal management and overall data center efficiency?
A3: With data center cooling needs surpassing what traditional air-cooling systems can deliver, liquid cooling has become the go-to choice for managing the excessive heat produced by power-hungry AI and HPC workloads in high-density compute environments. With our direct-to-chip cooling technology, data center customers can achieve zero water consumption, a more than 50 percent reduction in cooling power usage, and an 18 percent reduction in total power consumption. With widespread adoption, that translates into preventing an estimated 35 million metric tons of CO2 emissions annually.
While AI receives the lion's share of attention, it's important to remember that it still accounts for just 14 percent of global data center power usage, which means that the vast majority of data center space is dedicated to CPU-based workloads. To that end, we're also at the forefront of developing innovative cooling solutions that deliver immediate performance and efficiency improvements without requiring any changes to data center infrastructure. For instance, the standalone JetCool SmartPlate™ System, designed to simplify the adoption of liquid cooling, eliminates the need for facility water, which is typically not plumbed in air-cooled environments, while delivering an average total IT power savings of 15 percent, enabling customers to maximize compute in power-constrained environments.
Q4: Flex positions itself uniquely with solutions spanning “grid to chip” — from critical power infrastructure through embedded power modules. Your recent analysis suggests new configurations can improve system efficiency by about 20 percent, yielding significant annual savings per rack. As you look at the complete power delivery chain, where are the biggest efficiency gains being unlocked, and how does your vertical integration approach enable optimizations that wouldn’t be possible with point solutions?
A4: With 1+ MW racks on the horizon, data center operators are rethinking their architectures as power and thermal management requirements escalate. Today, power, cooling, and servers are often fully integrated within the same rack, an approach that has served the industry well. Disaggregating them, however, can (perhaps paradoxically) ease space constraints while delivering a host of other benefits. Even when power and cooling are moved into separate “sidecar” racks to increase compute capacity in the IT rack, extracting maximum value still requires an integrated, seamless interplay of systems from grid to chip.
For instance, a beefier 4,000-amp busbar feeding power into a reconfigured data hall with an end-of-row cooling distribution unit (CDU) can accommodate high-density IT racks and elevate the power architecture to 400 V. Flanking the IT racks with standalone power cabinets and CDUs not only increases the space in the IT rack dedicated to compute but also opens up the data hall floor space considerably. Furthermore, the new configuration can improve system efficiency by about 20 percent, which translates into significant annual energy savings per rack. In large data centers with thousands of racks, the potential savings are substantial. Data center operators are looking for partners with the ability to design and manufacture complete solutions and deploy them at scale worldwide.
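To see what a 20 percent efficiency improvement could mean per rack and per facility, here is rough, illustrative arithmetic. The baseline overhead, rack load, and fleet size are assumptions of mine, not Flex figures; only the 20 percent improvement comes from the text above:

```python
# Back-of-the-envelope sketch: assume the 20% improvement applies to
# power-delivery overhead, modeled here as dropping from 10% to 8% of IT load.
# All other numbers are hypothetical.

HOURS_PER_YEAR = 8760
RACK_IT_LOAD_KW = 100                       # hypothetical high-density rack
old_overhead = 0.10                         # assumed baseline delivery loss
new_overhead = old_overhead * (1 - 0.20)    # 20% improvement -> 8%

saved_kw = RACK_IT_LOAD_KW * (old_overhead - new_overhead)
saved_kwh_per_year = saved_kw * HOURS_PER_YEAR

print(f"Power saved per rack: {saved_kw:.1f} kW")
print(f"Energy saved per rack per year: {saved_kwh_per_year:,.0f} kWh")
print(f"Across 5,000 racks: {saved_kwh_per_year * 5000 / 1e6:,.1f} GWh/year")
```

Even under these conservative assumptions, the savings compound quickly at data-center scale, which is the point being made about thousands of racks.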
Q5: Looking ahead to the next three to five years, AI workloads will continue intensifying power and cooling demands while sustainability pressures mount. You’ve mentioned the potential for a 90 percent reduction in electrical room square footage by 2030, along with significant energy efficiency leaps. What efficiency breakthroughs do you see on the horizon that will be game-changers for sustainable AI infrastructure, and how is Flex positioned to lead in that transformation?
A5: Traditionally, converting incoming AC power to a DC voltage usable at the chip level requires several conversion steps, each of which erodes energy efficiency. But we’re seeing higher DC voltages emerge in the data center, including 800 V DC, which allows direct connection to renewable energy systems, and +/- 400 V DC, which is required for integrating battery energy storage systems (BESS) and microgrid applications.
Condensing power conversion into a single solid-state transformer not only produces efficiency gains, it significantly reduces the square footage required for electrical rooms: by some estimates, up to 90 percent by 2030. This opens up new paths to profitability, whether by saving on construction costs when capacity can be met with less space or by increasing compute capacity in the existing envelope by adding more racks. We call this the convergence of power and IT, and it is a welcome step forward.
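The efficiency case for collapsing conversion stages is multiplicative: overall efficiency is the product of the per-stage efficiencies, so even good individual stages compound into meaningful loss. A minimal sketch, with stage counts and per-stage efficiencies assumed for illustration rather than taken from any Flex product:

```python
# Illustrative: cascaded conversion stages vs. a single consolidated step.
# Per-stage efficiencies below are assumptions, not measured values.
from functools import reduce

# A traditional chain: transformer, UPS, PDU/rectifier, rack-level conversion
traditional_stages = [0.985, 0.96, 0.98, 0.975]
single_sst = [0.975]  # one consolidated solid-state conversion step

def chain_efficiency(stages):
    """Overall efficiency is the product of per-stage efficiencies."""
    return reduce(lambda acc, eff: acc * eff, stages)

print(f"Traditional chain: {chain_efficiency(traditional_stages):.1%}")
print(f"Single-stage SST:  {chain_efficiency(single_sst):.1%}")
```

Under these assumed numbers the four-stage chain lands around 90 percent overall while the single stage keeps its full 97.5 percent, which is the intuition behind "each conversion step impacts energy efficiency."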