Arm Joins OCP Board, Contributes Chiplet Architecture Spec

October 14, 2025

Today, Arm took a trailblazing step toward its bold vision for the future of converged AI data centers.

During the Open Compute Project (OCP) Foundation’s 2025 Global Summit, the foundation announced Arm as the newest member of its board of directors, underscoring the growing importance of Arm processors to the future of hyperscale and AI data centers.

Arm also made a splash, announcing a new contribution to the OCP ecosystem: a vendor-neutral Foundation Chiplet System Architecture (FCSA), with plans to drive further innovation up the compute stack.

“Every AI data center today is chasing densification—packing as much compute as possible into every rack,” said Eddie Ramirez, vice president of go-to-market for infrastructure at Arm. “That means we’re talking about racks that consume the equivalent power of 100 homes. Efficiency isn’t just an advantage anymore; it’s a survival requirement.”
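
For a sense of scale behind that “100 homes” comparison, here is a rough back-of-the-envelope check in Python. The figures are illustrative assumptions (roughly 120 kW for a dense AI rack and about 10,700 kWh per year, or ~1.2 kW of continuous draw, for an average US home), not numbers provided by Arm or OCP.

# Rough sanity check of the "one rack ~ 100 homes" comparison.
# Both inputs are illustrative assumptions, not figures from Arm or OCP.
rack_power_kw = 120.0                           # assumed draw of a dense AI rack
home_kwh_per_year = 10_700                      # assumed average annual US household use
home_avg_power_kw = home_kwh_per_year / 8_760   # convert to continuous draw (~1.2 kW)

homes_equivalent = rack_power_kw / home_avg_power_kw
print(f"~{homes_equivalent:.0f} homes per {rack_power_kw:.0f} kW rack")
# Prints roughly 98, i.e. on the order of 100 homes per rack.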

With its new FCSA spec, Arm is doubling down on openness, efficiency, and co-design across the semiconductor supply chain, with the goal of establishing vendor-neutral chiplet interoperability. The company also unveiled new growth in its Arm Total Design ecosystem, which has tripled in size since its 2023 launch. Together, these moves underscore Arm's strategy to drive innovation as the data center industry shifts from commodity servers to “rack-level systems and large-scale clusters designed specifically for AI.”

“We’ve reached a point where we’re not just bringing our standards into OCP,” Ramirez added. “We’re bringing the entire Arm Total Design ecosystem.”

From Efficient Cores to Efficient Racks

Arm’s contribution to OCP comes at a pivotal moment. Across the industry, power-hungry AI workloads are reshaping data center design—from servers to racks to entire campuses. The latest AI systems push rack-level power densities to previously unthinkable levels, multiplying power consumption tenfold and redefining the physics of deployment.

That message plays directly to Arm’s long-standing strength: energy-efficient computing. Arm’s low-power architecture has already enabled hyperscalers like AWS, Google, and Microsoft to deliver significant total cost of ownership (TCO) advantages in the cloud. Now, as AI demand accelerates, those same principles are being applied to massive, heterogeneous data center systems where every watt counts.

Redefining Integration: From Boards to Packages

While OCP has historically focused on modularity at the server level, Arm sees a shift happening inside processor composition itself. AI accelerators now combine compute, networking, and memory into highly integrated systems-on-chip (SoCs) composed of multiple chiplets—discrete dies that can be mixed and matched to optimize performance, cost, and power.

This evolution demands a new kind of open standard, Ramirez said.

“The integration point is moving from the server board to the silicon package,” he said. “AI SoCs now use chiplets for HBM memory, compute, IO, NPUs, and more—all married together. The Foundation Chiplet System Architecture enables that same modularity and interoperability at the silicon level.”

By contributing FCSA to OCP, Arm aims to enable companies to develop interoperable chiplets that can be reused across multiple products—expanding opportunities for smaller design houses and accelerating the overall pace of semiconductor innovation.
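
As a purely conceptual illustration of that reuse model, the short Python sketch below treats chiplets from different vendors as parts that can be assembled into one package as long as they agree on a common die-to-die interface. The class, field, and link names are hypothetical stand-ins for illustration only; they are not drawn from the FCSA specification.

from dataclasses import dataclass

# Hypothetical sketch of "mix-and-match" chiplet assembly; the names below
# are illustrative stand-ins and do not describe the actual FCSA spec.
@dataclass(frozen=True)
class Chiplet:
    vendor: str
    function: str      # e.g. "compute", "io", "hbm", "npu"
    d2d_link: str      # die-to-die interface the chiplet exposes

def assemble_package(chiplets, required_link):
    """Accept chiplets only if they all speak the package's common interface."""
    bad = [c for c in chiplets if c.d2d_link != required_link]
    if bad:
        raise ValueError(f"incompatible chiplets: {bad}")
    return list(chiplets)

# Dies from three hypothetical vendors combine because they share one interface,
# which is the kind of interoperability a vendor-neutral spec is meant to enable.
package = assemble_package(
    [Chiplet("VendorA", "compute", "d2d-v1"),
     Chiplet("VendorB", "io", "d2d-v1"),
     Chiplet("VendorC", "hbm", "d2d-v1")],
    required_link="d2d-v1",
)
print([c.function for c in package])   # ['compute', 'io', 'hbm']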

Arm Total Design: Scaling the Ecosystem

FCSA builds on momentum from Arm Total Design, a global collaboration launched two years ago to bring foundries, design houses, IP vendors, and manufacturers together to shorten design cycles and reduce development costs.

The ecosystem now includes 36 members, with 10 new additions debuting at OCP—among them Astera Labs, Credo, Eliyan, Marvell, Alchip, ASE, CoAsia, Insyde Software, Rebellions, and VIA NEXT. These companies span the AI SoC value chain, from IO chiplets and interconnect technology to advanced 3D packaging and die-to-die communication.

That influx of new collaborators is designed to create what Ramirez called a “mix-and-match” future where companies can specialize in a single chiplet and integrate it seamlessly into others’ designs through common frameworks and interfaces.

Driving Open, Sustainable AI Infrastructure

Beyond silicon innovation, Arm’s engagement with OCP reflects a broader sustainability and openness agenda. With AI driving exponential growth in energy consumption, efficiency has become inseparable from environmental responsibility.

“The old approach of multiplying power per rack by ten is simply not sustainable,” Ramirez said. “We need to deliver performance gains through efficiency, not escalation.”

He emphasized that Arm’s OCP participation will prioritize vendor-neutral, open ecosystems and collaboration, including among traditional competitors from the x86 ecosystem. The company’s leadership role in OCP’s chiplet work group aims to expand participation from both large and small players, strengthening the global chiplet supply chain.

Looking Ahead: Beyond Training to Inference

While much of the AI infrastructure discussion centers on massive training clusters, Arm is also looking toward the next frontier: inference. Ramirez noted that future OCP efforts will increasingly focus on inference workloads closer to the edge, where latency, efficiency, and scalability drive architectural requirements that differ from those of the mega-racks built for model training.

These dual tracks—AI training and inference—mirror the broader compute evolution from cloud to edge, and from monolithic design to modular intelligence.

The TechArena Take

Even five years ago, the idea of Arm as an OCP board member would have been hard to believe. Arm’s ascendancy as a critical player in hyperscale and AI infrastructure has been swift and impressive. The FCSA contribution to OCP could mark a pivotal shift in how the semiconductor industry collaborates moving forward, further underscoring Arm’s growing influence. By opening chiplet design to vendor-neutral standards, Arm is moving the ecosystem closer to the “plug-and-play” era of heterogeneous computing—one where silicon innovation can scale as fluidly as software.

As rack-level AI architectures push power and complexity to their limits, Arm’s strategy—anchored in efficiency, interoperability, and co-design—positions it at the heart of the industry’s most urgent transformation.

Check out Arm’s post for more information.
