
Why Does AI Need a Wider Rack?

When I think back to OCP Global Summit 2025, one of the most memorable sights on the show floor wasn’t a chip or a server tray. It was the racks.

Meta’s Open Rack Wide (ORW) specification introduced a double-width form factor that looked, at first glance, almost counterintuitive, especially in an industry moving toward disaggregation.

But ORW is a useful clue about where AI infrastructure actually is right now. We may be headed toward disaggregated systems, but today’s highest-performance AI deployments are still heavily constrained by short-reach, high-lane-count copper connections, plus the physical sprawl of power delivery, networking, and cooling that modern platforms demand. In other words, the rack is increasingly behaving less like furniture and more like the computer.

The Evolution of Open Rack

The Open Rack specification has been a cornerstone of hyperscale data center design for years. Unlike traditional 19-inch racks, Open Rack was designed from the ground up for large-scale cloud and AI deployments. Its signature 21-inch width improves airflow, and its powered busbar simplifies power delivery while reducing cable clutter.

Over time, Open Rack evolved to meet the growing demands of AI and high-performance computing. The original ORV1 specification introduced a 12V busbar, ORV2 improved scalability and cooling, and ORV3 moved to 48V—enabling higher power density and making liquid cooling easier to integrate (via rear-mounted manifolds). Then came ORV3 HPR (High Power Rack), which pushed further with added depth and more robust power management to support the most demanding AI servers while maintaining compatibility with the ORV3 standard.
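
To see why the move from 12V to 48V matters, consider the current a busbar must carry at a given rack power: quadrupling the distribution voltage cuts the current to a quarter and the resistive (I²R) loss in the busbar to a sixteenth. Here is a minimal sketch of that arithmetic; the rack power and busbar resistance are illustrative assumptions, not figures from any Open Rack specification.

```python
# Illustrative only: the rack power and busbar resistance below are
# assumed values, not numbers from any Open Rack specification.

def busbar_loss(rack_power_w: float, bus_voltage_v: float,
                busbar_resistance_ohm: float) -> tuple[float, float]:
    """Return (current in amps, I^2*R loss in watts) for a DC busbar."""
    current = rack_power_w / bus_voltage_v       # I = P / V
    loss = current ** 2 * busbar_resistance_ohm  # P_loss = I^2 * R
    return current, loss

RACK_POWER_W = 30_000.0   # hypothetical 30 kW rack
RESISTANCE_OHM = 0.0005   # hypothetical 0.5 milliohm busbar

for volts in (12.0, 48.0):
    amps, loss = busbar_loss(RACK_POWER_W, volts, RESISTANCE_OHM)
    print(f"{volts:4.0f} V bus: {amps:6.0f} A, {loss:7.1f} W dissipated in busbar")
```

Under these assumptions, the 12V bus carries 2,500 A and dissipates roughly 3.1 kW in the busbar alone, while the 48V bus carries 625 A and dissipates under 200 W. The same logic applies again as the community looks at even higher distribution voltages.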

For a while, ORV3 HPR seemed like the pinnacle of rack design. But as AI workloads continued to push the limits of power and cooling, even HPR began to show its constraints.

The Case for a Wider Rack

The industry is undeniably moving toward disaggregation, separating IT load, power, and cooling into distinct systems. Draft specifications and roadmaps for disaggregated power architectures, targeting 100 kW today and up to 1 MW-class racks over time, are already being shared through the OCP community, so a wider rack design might seem like a step backward. However, before we can fully embrace disaggregation at rack scale, we need to overcome the limitations of copper-based electrical connections. The sheer number of electrical and signaling leads required to connect rack systems at scale, compounded by distance, loss, and power constraints, presents significant challenges. Until those challenges are resolved, many AI deployments favor a “scale-up” architecture over a “scale-out” approach.
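
To get a feel for the scale of the copper problem, consider a fully connected scale-up domain: the number of point-to-point links grows quadratically with the number of accelerators, and each link is itself a bundle of differential pairs. The numbers in this back-of-the-envelope sketch are purely illustrative assumptions, not parameters of any shipping system.

```python
# Back-of-the-envelope: how copper link counts grow in a fully
# connected scale-up domain. All parameters here are hypothetical.

from math import comb

PAIRS_PER_LINK = 8  # assumed differential pairs per point-to-point link

for gpus in (8, 16, 32, 72):
    links = comb(gpus, 2)            # every GPU wired to every other GPU
    pairs = links * PAIRS_PER_LINK   # total differential pairs to route
    print(f"{gpus:3d} GPUs -> {links:5d} links, {pairs:6d} differential pairs")
```

Real systems use switched topologies rather than full meshes, but the trend is the same: every accelerator added to the scale-up domain multiplies the short-reach copper that has to fit inside, or very near, the rack.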

There’s another factor at play: the physical layout of compute systems is expanding. As GPU die sizes grow, so do the memory, networking, and power delivery systems that support them. In short, while we know disaggregated systems are the future, we still need an intermediate solution to bridge the gap. That’s where Open Rack Wide (ORW) comes in.

ORW scales up the HPR’s feature set to accommodate much larger, heavier, and more power-intensive AI systems. With double the width of ORV3 racks and a slightly taller frame, ORW provides the space and structural integrity needed for next-generation AI platforms.

What Makes ORW Different?

ORW isn’t just a bigger rack; it’s a reimagined platform designed for the unique demands of AI. At 1200 mm wide (compared to ORV3’s 600 mm), ORW offers significantly more real estate for high-density compute trays, liquid cooling manifolds, and power distribution systems. It supports up to 3500 kg of IT gear, more than double the capacity of ORV3 HPR, and is engineered to handle the thermal and electrical loads of modern AI workloads. (Fun fact: ORW is also affectionately known as “BFR,” short for “Big Freaking Rack.”)

One of the most compelling aspects of ORW is its flexibility. The specification supports multiple power architecture options, including legacy ORV3 power shelves, side power racks for low- or high-voltage DC input, and even native high-voltage busbars that distribute power directly within the rack. This adaptability ensures that ORW can evolve alongside AI infrastructure, whether for training clusters, inference workloads, or hybrid deployments.

Liquid cooling is another key feature. ORW’s design accommodates high-power liquid-cooled busbars, which are essential for removing the resistive heat the busbar itself generates while delivering power to today’s AI chips. This focus on cooling efficiency aligns with the industry’s push toward sustainable, high-performance data centers.
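
For a rough sense of what a liquid-cooled busbar implies in practice, the standard Q = ṁ·c_p·ΔT relation gives the coolant flow needed to carry away a given ohmic heat load. The heat loads and allowed temperature rise below are illustrative assumptions, not ORW design figures.

```python
# Rough sizing sketch: water flow needed to remove busbar ohmic heat.
# Heat loads and the allowed temperature rise are assumed for illustration.

CP_WATER = 4186.0   # specific heat of water, J/(kg*K)
RHO_WATER = 997.0   # density of water, kg/m^3

def coolant_flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Liters/minute of water absorbing heat_w with a delta_t_k rise."""
    mass_flow_kg_s = heat_w / (CP_WATER * delta_t_k)   # from Q = m*cp*dT
    return mass_flow_kg_s / RHO_WATER * 1000.0 * 60.0  # kg/s -> L/min

for heat_w in (500.0, 2000.0, 5000.0):
    print(f"{heat_w:6.0f} W of busbar heat -> "
          f"{coolant_flow_lpm(heat_w, 10.0):5.2f} L/min at a 10 K rise")
```

The point is not the exact numbers but the shape of the problem: as rack power climbs, the busbar itself becomes a heat source significant enough to need its own place on the cooling loop.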

Industry Collaboration and Early Adoption

ORW isn’t just a Meta project; it’s an open standard developed in collaboration with industry leaders. The base specification for ORW was announced by Meta at the OCP Global Summit 2025, and it quickly gained traction. Companies like AMD, Wiwynn, and Rittal debuted their own ORW-based designs at the summit, showcasing the specification’s potential. AMD’s “Helios” rack-scale reference system, for example, leverages ORW to deliver optimized performance for AI clusters, while Wiwynn unveiled its double-wide rack architecture for next-generation AI workloads. Rittal, meanwhile, is preparing ORW-compatible enclosures and accessories for mass production later in 2026. This collective effort underscores the importance of open standards in shaping the future of AI infrastructure.

It’s worth noting that not everyone is on board. NVIDIA, for instance, is advancing vertically integrated rack-scale systems and architectures that don’t necessarily map cleanly to ORW. But for those committed to open standards, ORW offers a compelling path forward. The AMD design exemplifies this, integrating GPU, CPU, and networking into a single, cohesive rack system for large-scale AI and high-performance computing (HPC) workloads.

Challenges on the Road Ahead

Developing ORW wasn’t without its challenges. The increased size and weight of the rack required new manufacturing approaches, including automation and bolt-together assembly techniques to simplify production and shipping. Testing presented another hurdle: traditional test equipment couldn’t handle ORW’s 3500 kg payload, forcing the team to partner with automotive and aerospace testing facilities to validate the design.

Standardization is also critical. For ORW to succeed, the OCP community must continue to refine the specification and ensure interoperability across vendors. This collaborative approach is what makes open standards like ORW so powerful—they bring together hyperscalers, vendors, and researchers to solve shared challenges.

The Future of AI Infrastructure

ORW represents a foundational shift in data center design. It addresses today’s power, cooling, and space constraints while laying the groundwork for future advancements. As the industry works toward full disaggregation, ORW provides a scalable, open platform that can evolve with the needs of AI workloads.

By providing a bridge to the future, ORW enables the industry to innovate today while preparing for the next wave of data center evolution.

Matty Bakkeren

Independent Growth Consultant & Founder
