Arm's Data Center Advances: Chiplets, Efficiency & AI Integration

Data Center
Allyson Klein
November 4, 2024

I was lucky to catch up with Eddie Ramirez, VP of Marketing for Arm’s infrastructure business, at the recent OCP Summit. Eddie was last on the show at last year's OCP Summit, talking about Arm’s focus on developing a data center ecosystem, and I was keen to learn about the progress the company has made in this arena. Arm is making a mark on data center infrastructure with a focus on efficiency, chiplet innovation, and scalable solution design.

During the recent OCP Summit 2024, Data Insights podcast co-host Jeneice Wnorowski of Solidigm and I had the pleasure of welcoming Eddie back to the TechArena to better understand the company’s impact across the industry. Arm’s big announcement this year at OCP Summit centered on the power of chiplets to accelerate silicon design. Chiplet technology allows multiple processing units to be combined in a single package, streamlining custom chip design. Arm’s Total Design program helps partners adopt its cores efficiently, with configurations that cater to diverse needs, from general-purpose tasks to specialized AI processing. This modular approach provides flexibility, supporting efficient scaling for data centers that need adaptable configurations for different workloads.

Eight partners within Arm’s Total Design program announced chiplet projects they have kicked off, ranging from 16-core to 64-core configurations that can be used in a variety of products. One partnership in particular brings together Samsung Foundry, Korean ASIC design partner ADTechnology, and Rebellions AI, a startup delivering TPU accelerators. Through this collaboration, Arm has demonstrated how its program helps deliver an integrated design with 3X greater performance efficiency than conventional GPU-based solutions, underscoring the role of best-in-breed chiplet solutions in data center applications. Given where chiplet design is headed with Arm, it comes as no surprise that this was a focus of OCP Summit, land of the hyperscalers. Arm cores have gained traction among the major players – AWS, Microsoft, and Google – which all now integrate the technology into their home-grown designs, using them for internal workloads as well as customer instances.

Compute-efficient architectures have long been in Arm’s DNA. Arm designs deliver up to 60% higher power efficiency than x86 servers, allowing cloud providers to reduce power consumption and total cost of ownership (TCO) while advancing sustainability goals. This energy-saving approach is the key to Arm’s success with the hyperscalers, Eddie said, providing them a huge benefit and positioning Arm as an optimal choice for large-scale workloads.
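To put an efficiency claim like that in rough perspective, here is a minimal back-of-the-envelope sketch in Python. The fleet size, per-server power draw, and electricity price below are hypothetical placeholders, not figures from Arm or the podcast, and the 60% figure is treated as a best case.

```python
# Back-of-the-envelope sketch: how a per-server power-efficiency advantage
# might translate into annual energy and cost savings at fleet scale.
# All inputs are hypothetical placeholders, not figures from Arm or the podcast.

SERVERS = 10_000            # hypothetical fleet size
X86_WATTS = 400             # assumed average draw per x86 server under load
EFFICIENCY_GAIN = 0.60      # "up to 60% higher power efficiency" (best case)
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.10        # assumed electricity price in USD

# Model the same throughput at higher efficiency: if work-per-watt is 1.6x,
# power for equal work scales by 1 / (1 + gain).
arm_watts = X86_WATTS / (1 + EFFICIENCY_GAIN)

x86_kwh = SERVERS * X86_WATTS * HOURS_PER_YEAR / 1_000
arm_kwh = SERVERS * arm_watts * HOURS_PER_YEAR / 1_000

print(f"x86 fleet energy:  {x86_kwh:,.0f} kWh/year")
print(f"Arm fleet energy:  {arm_kwh:,.0f} kWh/year")
print(f"Estimated savings: {(x86_kwh - arm_kwh) * PRICE_PER_KWH:,.0f} USD/year")
```

Real TCO math would also fold in server pricing, cooling overhead, and utilization, but even this toy model shows why per-watt efficiency compounds quickly at hyperscale.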

With the rise of AI, the need for GPUs is amplified, especially to train large-scale models. However, CPUs remain essential, particularly for the inference stage, where AI models process data and provide real-time predictions. Unlike training, which demands high power, inference tasks can be handled efficiently by CPUs. Arm-based processors offer a cost-effective solution, balancing performance with reduced energy consumption.

Arm’s reach extends beyond compute into networking and storage within the data center. Arm cores are now embedded in top-of-rack switches, data processing units (DPUs), and baseboard management controllers (BMCs), enhancing efficiency in high-speed data transmission and storage. By deploying Arm cores across these types of infrastructure, data centers achieve better resource management and power optimization, aligning with the performance demands of AI workloads. This integrated approach allows data centers to streamline operations and improve energy efficiency at every level.

Arm’s Neoverse platform, the company's infrastructure-focused product line, includes high-performance cores and interconnect IP for data centers and edge environments. Neoverse’s adaptable architecture enables Arm partners to integrate the latest technology and extend it with additional I/O or storage features.

The Neoverse V3 platform further enhances the performance and flexibility of Arm-based systems, making them well suited for AI and data processing applications. This scalable approach enables data centers to meet growing performance needs without compromising power efficiency.

So what’s the TechArena take? I love chiplets and love what Arm is doing with an ecosystem. This design innovation makes sense for a wide array of use cases, and Arm’s foundation will help the industry move further, faster. Arm’s commitment to energy efficiency, modularity, and open collaboration also aligns well with Open Compute Project tenets, transforming data center infrastructure and offering true differentiation in a crowded field. Through programs like Total Design and platforms like Neoverse, Arm is responsibly building efficient and scalable solutions that meet the demands of AI, cloud, and edge applications.

There is a lot of disruption in the compute landscape with AI acceleration taking center stage. I see two paths of opportunity for Arm: one as a “head node” alternative to x86 with noted energy efficiency advantages, the other as a chiplet core integrated with TPU or other acceleration chiplets as an alternative to GPUs. Both are exciting to see gain traction in the market, and we’ll keep watching this space for more.

Listen to the full podcast here.
