AMD’s 38x Efficiency Leap Sets Stage for Ambitious 2030 Goal

September 23, 2025

As the demand for AI scales and the energy footprint of data centers comes under sharper scrutiny, AMD is pushing the boundaries of what’s possible in efficiency. The company surpassed its ambitious energy-efficiency goal ahead of schedule, with a 38x improvement, and is now setting its sights even higher: a 20x rack-scale efficiency target by 2030.

In this Five Fast Facts Q&A, I sat down with Justin Murrill, senior director of corporate responsibility at AMD, to explore what these milestones mean in practice, from slashing carbon emissions for AI training to reimagining rack-level design—and how innovation in hardware, software, and ecosystem collaboration will be key to building a more sustainable future for compute.

Q1: AMD just announced surpassing your ambitious 30x25 goal ahead of schedule, achieving a remarkable 38x improvement in energy efficiency. Can you walk us through what this achievement means in practical terms for data centers and AI workloads, and how this 97% reduction in energy consumption compares to what the industry was achieving just five years ago?

When we set our 30x25 goal, we wanted to ensure it was rooted in a clear benchmark and represented real-world energy use.[i] We worked closely with renowned compute energy-efficiency researcher and author Dr. Jonathan Koomey to develop a goal methodology that includes segment-specific data center power usage effectiveness (PUE) and typical energy consumption for accelerated computing used in HPC and AI-training workloads.

The practical implication is that data centers utilizing AMD CPUs and GPUs can achieve the same computing performance with 97% less energy when compared to systems from just five years ago. This represents more than a 2.5x acceleration over industry trends from the previous five years (2015-2020). We achieved this through deep architectural innovations, aggressive performance-per-watt gains across our data center GPU and CPU products, and software optimizations.
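As a quick check on the arithmetic: a 38x gain in performance per watt means a fixed workload needs only 1/38 of the original energy, roughly a 97% reduction. A minimal sketch of that conversion (illustrative only, not the goal methodology itself):

```python
# Minimal sketch: converting an efficiency multiple into an energy
# reduction for a fixed amount of computational work. Illustrative only.

def energy_reduction(efficiency_multiple: float) -> float:
    """Fractional energy saved when performance-per-watt improves
    by `efficiency_multiple` and the total work stays the same."""
    return 1.0 - 1.0 / efficiency_multiple

print(f"38x -> {energy_reduction(38):.1%} less energy")  # ~97.4%
print(f"30x -> {energy_reduction(30):.1%} less energy")  # ~96.7%
```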

Our teams are accelerating innovation to improve energy efficiency, which will continue to have ripple effects. On the software side, we can continue to drive enhancements well after products ship. As exciting as it was to beat our goal, we are looking forward to the advances we will continue to make.

Q2: With AI scaling rapidly and energy demands growing exponentially, you’ve set another bold target: 20x rack-scale efficiency improvement for AI training and inference from 2024 to 2030. This shifts from node-level to system-level optimization. What drove this strategic pivot to rack-scale metrics, and how does this new goal address the reality that a typical AI model requiring 275+ racks today could theoretically be trained on less than one rack by 2030?

As workloads scale and demand continues to rise, node-level efficiency gains won't keep pace. The progression of our product goals to the rack level reflects an expanding ambition and a business strategy to optimize a broader portion of the ecosystem. It is also reflected in our journey toward building a best-in-class portfolio for the rapidly evolving AI market. Over the last few years, we have made several strategic acquisitions to expand our AI software, hardware, and systems capabilities, including scaling to full rack-level system design with the acquisition of ZT Systems.

Our new rack-scale goal outpaces the historical industry improvement trend (2018 to 2025) by nearly 3x. To demonstrate the real-world implications, we used a typical 2025 AI model as a benchmark; training it today requires more than 275 racks. With the energy efficiency gains we plan to make, we believe we could accomplish the same training with less than one fully utilized rack in 2030.[ii] This rack consolidation could enable more than a 95% reduction in operational electricity use and a 97% reduction in carbon emissions.

Q3: The environmental implications are staggering – potentially reducing carbon emissions from 3,000 to just 100 metric tons CO2 for training a typical AI model. Given the industry’s growing focus on sustainable AI, how does AMD’s approach to energy-efficient design integrate with broader sustainability goals, and what role do you see these efficiency gains playing in making AI more environmentally responsible at scale?

Our environmental sustainability goals span our operations, supply chain, and products, and are integrated into how we conduct business responsibly. Increasing the computing performance delivered per watt of energy consumed is a vital aspect of our corporate strategy, our climate strategy, and our ethos of tackling some of the world’s most important challenges. Global electricity consumption is on a trajectory to exceed what the market can support within the next two decades.[iii] Further, many of our customers have energy efficiency and GHG emissions goals of their own. The need for innovative energy and computing solutions is therefore becoming increasingly important – perhaps nowhere more so than in the data center.

We also see opportunities for AI to advance overall data center sustainability. For example, AI-driven power management can identify inefficiencies, like underused or overtaxed equipment, and automatically adjust the allocation of resources for optimal power consumption. AMD GPUs, CPUs, adaptive computing, networking, and software are designed to work together seamlessly to help optimize data center energy management systems by adjusting workloads and system configurations.
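As a deliberately simplified illustration of the kind of reallocation loop described above, the sketch below shifts power budget from underused devices to overtaxed ones. The `Device` fields, thresholds, and step size are hypothetical placeholders, not an AMD or ROCm API:

```python
# Simplified sketch of a power-management loop: find underused and
# overtaxed devices, then shift power budget between them. All names,
# thresholds, and values are hypothetical placeholders, not an AMD API.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    utilization: float  # fraction of capacity in use, 0.0-1.0
    power_cap_w: int    # current power cap in watts

def rebalance(devices: list[Device], step_w: int = 25) -> None:
    """Move a slice of power budget from idle devices to busy ones."""
    idle = [d for d in devices if d.utilization < 0.30]
    busy = [d for d in devices if d.utilization > 0.90]
    for donor, receiver in zip(idle, busy):
        donor.power_cap_w -= step_w
        receiver.power_cap_w += step_w

gpus = [Device("gpu0", 0.15, 700), Device("gpu1", 0.95, 700)]
rebalance(gpus)
print([(d.name, d.power_cap_w) for d in gpus])  # gpu0: 675 W, gpu1: 725 W
```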

Beyond the data center, AMD today is the only provider delivering end-to-end AI solutions. Being able to deliver AI compute locally on a device – whether a PC or a processor embedded at the edge – can help reduce the power burden on data centers.  

Q4: Your 20x goal represents what AMD can control directly through hardware and system design, but you've indicated that combined with software and algorithmic advances, we could see up to 100x overall efficiency improvements. How is AMD enabling and collaborating with the broader ecosystem to unlock these additional gains beyond your direct hardware contributions?

We know that on top of the hardware and system-level improvements, even greater AI model efficiency gains will be possible through software optimizations. We estimate these additional gains could be up to 5x over the goal period as software developers discover smarter algorithms and continue innovating with lower-precision approaches at current rates.[iv]

This is why we believe the open ecosystem is so important for AI innovation. By harnessing the intelligence of the broader developer community, we can accelerate energy efficiency improvements. While AMD is not claiming that full multiplier in our own goal, we are proud to provide the hardware foundation that enables it and to support the open ecosystem and developer community working to unlock those gains.  

Whether through open standards, our open software approach with AMD ROCm™, or our close collaboration with our partners, AMD remains committed to helping innovators everywhere scale AI more efficiently.

Q5: AMD has consistently exceeded its public efficiency goals over the past decade. As you embark on this new 2030 target that aims to exceed industry improvement trends by almost 3x, what gives you confidence this ambitious goal is achievable, and how will you maintain transparency and accountability as the industry watches the company’s progress toward this next milestone?

At AMD, we have repeatedly demonstrated the ability to lay out a vision for computing energy efficiency, project the pathway of innovations, and execute on our roadmap. We have significantly expanded our engineering talent pool with the best and brightest minds to deliver some of the world’s most advanced chips, software, and enterprise AI solutions. We’ve also steadily increased our investment in research and development to drive ongoing innovation in compute performance and efficiency. These strategies, along with comprehensive design solutions, will support continued exponential gains in both performance and energy efficiency.

Fundamentally, our culture at AMD thrives on setting big goals that address important challenges and require new ways of thinking. We will continue to report transparently each year on our progress toward our goals and to work with third parties on measurement and verification. You can read more about our recent progress in our 30th annual Corporate Responsibility Report.

[i] Includes high-performance CPU and GPU accelerators used for AI training and high-performance computing (HPC) in a 4-accelerator, CPU-hosted configuration. Goal calculations are based on performance scores as measured by standard performance metrics (HPC: Linpack DGEMM kernel FLOPS with 4k matrix size; AI training: lower-precision, training-focused floating-point GEMM kernels operating on 4k matrices) divided by the rated power consumption of a representative accelerated compute node, including the CPU host plus memory and 4 GPU accelerators.
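Read literally, the methodology divides a benchmark performance score by the rated power of the full node. A minimal sketch with placeholder numbers (the wattages and score below are invented for illustration, not AMD's measured values):

```python
# Sketch of the node-level metric in [i]: benchmark score divided by the
# rated power of the whole node (CPU host + memory + 4 GPU accelerators).
# All numbers below are invented placeholders.

def node_perf_per_watt(benchmark_flops: float,
                       cpu_host_w: float,
                       gpu_w: float,
                       n_gpus: int = 4) -> float:
    """Performance per watt for a CPU-hosted, multi-accelerator node."""
    rated_power_w = cpu_host_w + n_gpus * gpu_w
    return benchmark_flops / rated_power_w

# Hypothetical node: 1,000 W CPU host + memory, four 700 W accelerators.
eff = node_perf_per_watt(benchmark_flops=5.0e15, cpu_host_w=1000, gpu_w=700)
print(f"{eff / 1e12:.2f} TFLOPS per watt")  # ~1.32
```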

[ii] AMD estimated the number of racks needed to train a typical notable AI model based on Epoch AI data (https://epoch.ai). For this calculation we assume, based on these data, that a typical model takes 10^25 floating-point operations to train (the median of 2025 data) and that training takes place over one month. Sustained FLOPS needed = 10^25 FLOPs / (seconds per month) / model FLOPs utilization (MFU) = 10^25 / (2.6298×10^6) / 0.6. Racks = FLOPS needed / (FLOPS per rack in 2024 and 2030). The compute performance estimates from the AMD roadmap suggest that approximately 276 racks would be needed in 2025 to train a typical model over one month using the MI300X product (assuming 22.656 PFLOPS/rack with 60% MFU), and that <1 fully utilized rack would be needed to train the same model in 2030 using a rack configuration based on an AMD roadmap projection. These calculations imply a >276-fold reduction in the number of racks needed to train the same model over this six-year period. Electricity use for a MI300X system to completely train a def
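The footnote's arithmetic can be reproduced directly from the stated figures; a short sketch (the small gap between ~280 here and the ~276 quoted above presumably comes from rounding in the published inputs):

```python
# Reproducing the rack estimate in [ii] from the figures stated above.
TOTAL_FLOPS = 1e25            # training compute for a typical 2025 model
SECONDS_PER_MONTH = 2.6298e6  # ~30.4 days
MFU = 0.6                     # model FLOPs utilization
RACK_FLOPS_2025 = 22.656e15   # MI300X rack, peak FLOPS

# Sustained throughput needed to finish training in one month at 60% MFU:
required_flops = TOTAL_FLOPS / SECONDS_PER_MONTH / MFU  # ~6.34e18 FLOPS

racks_2025 = required_flops / RACK_FLOPS_2025
print(f"~{racks_2025:.0f} racks in 2025")
# -> ~280 with these rounded inputs; the footnote quotes ~276. Dividing by
#    the projected 2030 rack performance (not public) yields the <1 figure.
```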

[iii] “The Decadal Plan for Semiconductors,” Semiconductor Research Corporation, https://www.src.org/about/decadal-plan/ (accessed May 23, 2024).

[iv] Regression analysis of achieved accuracy per parameter across a selection of model benchmarks, such as MMLU, HellaSwag, and ARC Challenge, shows that improving the efficiency of ML model architectures through novel algorithmic techniques, such as Mixture of Experts and State Space Models, can improve their efficiency by roughly 5x during the goal period. Similar numbers are quoted in Patterson, D., J. Gonzalez, U. Hölzle, Q. Le, C. Liang, L. M. Munguia, D. Rothchild, D. R. So, M. Texier, and J. Dean. 2022. "The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink." Computer, vol. 55, no. 7, pp. 18-28.
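As a toy illustration of the trend extrapolation described in [iv], the sketch below fits a log-linear trend to an efficiency series and projects the multiple over the goal period. The data points are invented placeholders, not the benchmark results behind the footnote:

```python
# Toy version of the extrapolation in [iv]: ordinary least squares on
# log(efficiency) vs. year, projected over 2024-2030. The efficiency
# series below is an invented placeholder, not AMD's benchmark data.
import math

years = [2019, 2020, 2021, 2022, 2023, 2024]
efficiency = [1.0, 1.3, 1.7, 2.3, 3.0, 3.9]  # hypothetical relative scores

n = len(years)
x_mean = sum(years) / n
y_mean = sum(math.log(e) for e in efficiency) / n
slope = (sum((x - x_mean) * (math.log(e) - y_mean)
             for x, e in zip(years, efficiency))
         / sum((x - x_mean) ** 2 for x in years))

gain = math.exp(slope * (2030 - 2024))
print(f"Projected 2024-2030 algorithmic gain: ~{gain:.1f}x")  # ~5.2x here
```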

Therefore, assuming innovation continues at the current pace, a 20x gain from hardware and system design, amplified by a 5x gain from software and algorithmic advancements, can lead to a 100x total gain by 2030.
