
OCP 2025: Open is the Only Way We Scale the AI Data Center
Walking the halls of the Open Compute Project Foundation’s 2025 summit in San Jose this week, you can feel it: open collaboration has graduated from “nice-to-have” to the de facto playbook for AI-era buildouts.
With more than 11,000 people converging on OCP Global Summit 2025, the conversation has moved well past server SKUs and into rack- and data-center-level design—how we standardize power, cooling, networking, and fleet operations so the whole industry can build AI capacity at the pace of demand. As one keynote put it, we’re at the precipice of an intelligence revolution, but rockets don’t launch without a ground crew. And the OCP community is Mission Control.
OCP’s news cadence underscores the shift from parts to platforms. The Foundation unveiled its “Open Data Center for AI” strategic push—a unifying umbrella to define the common physical and operational substrate of AI facilities so racks, pods, and clusters are fungible across operators and geographies. The explicit aim: speed time-to-deploy by reducing fragmentation in how we bring power and liquid cooling into the hall and how we certify the facility for high-density AI in the first place. Think of it as OCP moving from open boxes to an open blueprint for the whole building.
Education and talent are getting the same open treatment. OCP Academy is live, packaging community know-how into courses for everyone from newcomers to seasoned operators. For an industry racing to retrain on liquid loops, 800 VDC distribution, and AI-centric fleet ops, this is oxygen.
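To see why 800 VDC distribution earns a course of its own, a little back-of-the-envelope arithmetic helps. The figures below are illustrative assumptions, not OCP spec values; the underlying physics is simply that, for a fixed power, current scales as 1/V and conductor (I²R) loss as 1/V²:

```python
# Illustrative only: why higher-voltage DC distribution matters at AI rack
# densities. Round-number assumptions, not OCP spec values.

RACK_POWER_W = 1_000_000  # hypothetical 1 MW rack

for bus_voltage in (48, 400, 800):
    current = RACK_POWER_W / bus_voltage  # I = P / V
    # Resistive loss scales with I^2 * R, so loss relative to the 800 V case:
    relative_loss = (current / (RACK_POWER_W / 800)) ** 2
    print(f"{bus_voltage:>4} VDC -> {current:>8.0f} A per rack, "
          f"~{relative_loss:.0f}x the conductor loss of 800 VDC")
```

At a hypothetical megawatt per rack, a 48 VDC bus would have to carry more than 20,000 A; at 800 VDC it's about 1,250 A, with conductor losses down by a factor of nearly 280. That, in rough strokes, is why the curriculum is changing.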
Data movement is foundational; networking is the lifeblood of AI clusters. OCP introduced ESUN (Ethernet for Scale-Up Networking), a new workstream backed by hyperscalers and silicon providers to advance Ethernet as a scale-up option (the single-hop, ultra-low-latency links between accelerators inside tightly coupled racks and pods). It complements the existing SUE-Transport track and coordinates with UEC and IEEE—ideally reducing bespoke glue and improving multi-vendor interoperability. That said, we aren’t sold that Ethernet is the only solution; purpose-built scale-up fabrics and emerging interconnect approaches will continue to have a place depending on workload, topology, and time-to-value.
On the facilities side, OCP is stitching together the standards bodies that determine whether a design can leave the whiteboard. An alliance with OIX harmonizes OIX-2 with OCP Ready™ for metro-edge interconnection—useful for anyone pushing AI nearer to data sources and end users.
Governance matters, too. OCP added AMD, Arm, and NVIDIA to its Board—a signal that the silicon leaders want to shape, not just ship, the standards that will define AI factories. It’s hard to overstate how important that is as the ecosystem navigates CPU/XPU diversity, link-layer choices, and the migration to higher-voltage DC power.
From the keynotes and hallway conversations, a few themes stand out:
- Fungibility beats fragility. Hyperscalers described designing for “agility and fungibility” from chip to rack to region. That’s driving common mechanicals (e.g., Open Rack variants), standardized firmware/telemetry interfaces (see the telemetry sketch after this list), and power architectures that can swing from air-augmented deployments to fully liquid-cooled pods without re-architecting the building. The OCP umbrella initiative is explicitly about codifying that common ground so more of us can buy, build, and operate at hyperscale tempo.
- Power and cooling graduate to first-class citizens. The big numbers are sobering: multi-MW racks, 15× power-density growth, and liquid-first designs. The industry is coalescing around higher-voltage distribution and rack-level energy buffering to tame synchronized AI load spikes (a rough buffering calculation follows this list). This is open standards work with direct capex/opex impact.
- Networking at scale—Ethernet plays a growing role, but it isn’t the only fabric. ESUN’s formation is a pragmatic move to tap Ethernet’s broad supply chain and toolchain while defining what scale-up links need (losslessness, error handling, deterministic latency). Paired with ongoing OCP work on universal link layers, it points toward more multi-vendor interoperability. As noted above, though, purpose-built fabrics and alternative transports will still make sense depending on workload, topology, and time-to-value.
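On the telemetry point above: one reason standardized interfaces matter is that the same fleet-management code can run against any compliant vendor’s hardware. Here is a minimal sketch, assuming a BMC that implements DMTF’s Redfish standard (common in OCP-class gear); the host, credentials, and chassis ID are illustrative placeholders, not real endpoints:

```python
# Minimal sketch of what "standardized telemetry" buys operators: the same
# Redfish (DMTF) query works across any vendor that implements the standard.
# Host, credentials, and chassis ID below are illustrative placeholders.
import requests

BMC = "https://bmc.example.internal"
AUTH = ("operator", "password")  # use proper session auth in practice

def rack_inlet_temps(chassis_id: str = "1") -> list[tuple[str, float]]:
    """Read temperature sensors from a Redfish-compliant BMC."""
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Thermal"
    thermal = requests.get(url, auth=AUTH, verify=False, timeout=10).json()
    return [(t["Name"], t["ReadingCelsius"]) for t in thermal.get("Temperatures", [])]

for name, celsius in rack_inlet_temps():
    print(f"{name}: {celsius:.1f} °C")
```

Multiply that by thousands of racks from multiple vendors and the value of a common schema becomes obvious: fungibility at the fleet-ops layer, not just the mechanical one.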
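And on energy buffering: a toy calculation makes the idea concrete. Every figure below is an assumption for the sake of arithmetic, not a published OCP target:

```python
# Rough illustration of sizing rack-level energy buffering. All figures are
# assumptions for the sake of arithmetic, not published OCP targets.

baseline_kw = 600        # hypothetical steady-state rack draw
spike_kw = 1_000         # synchronized training step pushes draw to 1 MW
spike_seconds = 2        # window the facility feed can't follow instantly

# Energy the rack-level buffer (capacitors/batteries) must supply:
buffer_kj = (spike_kw - baseline_kw) * spike_seconds  # kW * s = kJ
buffer_wh = buffer_kj * 1000 / 3600
print(f"Buffer must cover ~{buffer_kj} kJ (~{buffer_wh:.0f} Wh) per rack")
```

A few hundred watt-hours per rack sounds modest until you multiply it across a hall where every rack spikes in lockstep with the training loop. That synchrony is exactly what makes this a standards problem rather than a one-off engineering fix.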
The TechArena Take
OCP’s superpower has always been translating hyperscale breakthroughs into reusable playbooks. What’s different this year is the scope: open standards are now spanning the entire stack, from accelerator fabrics and firmware to 800 VDC buses, CDUs, and interconnection-ready metro-edge sites. Add formal education (OCP Academy) and cross-org alliances (OIX), and you get a faster flywheel: publish spec → validate in the open → train the market → scale across operators.
If you’re an enterprise, colo, or regional cloud eyeing AI expansion, your path to “AI-ready” increasingly starts with OCP checklists. If you’re a vendor, aligning roadmaps to ESUN, OCP Ready™ v2, and the Open Data Center for AI guidelines will shorten sales cycles because you’ll be speaking the same language as your customers’ facilities and networking teams.
The community is also maturing in how it governs itself. Seeing AMD, Arm, and NVIDIA take board seats alongside the traditional hyperscaler leadership matters; the next three years will be defined by choices about link layers, liquid classes, telemetry standards, and power domains. Having the architects at the table can keep us on a path where silicon diversity is a feature, not a headache.
At the scale AI now demands, you can’t buy your way out of physics, and you can’t vendor-lock your way to speed. OCP is where the industry is deciding, together, how we wire the next decade of compute. Open is no longer the alternative—it’s the plan.