
To keep up with AI’s relentless pace, companies are innovating on every aspect of technology inside the data center, from silicon, storage, network, and compute systems to the power and cooling technologies that support data center facilities. This innovation extends to system power delivery, where major efforts are being made to re-architect foundational power delivery technologies from the ground up.
Hyperscale operators are now planning gigawatt-class data centers designed to consume more than 1 billion watts of power. At the rack level, individual GPUs frequently draw more than 1,000 watts, roughly three times the average CPU, and multi-GPU servers can easily top 8,000 watts per node. As a result, racks that once operated comfortably at 10–20 kilowatts are being pushed toward megawatt-class designs.
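The density figures above can be made concrete with some back-of-the-envelope arithmetic. This is an illustrative sketch using only the rough numbers quoted in this article, not measurements of any specific system; the node and rack counts are hypothetical.

```python
# Back-of-the-envelope rack power estimate using the illustrative
# figures quoted above (not measurements of a specific system).

GPU_WATTS = 1_000  # a single modern GPU can draw ~1 kW

def node_power_watts(gpus_per_node: int, overhead_factor: float = 1.0) -> float:
    """Rough power draw of a multi-GPU server node.

    overhead_factor can be raised to account for CPUs, fans, and NICs;
    it defaults to 1.0 to reproduce the GPU-only figure.
    """
    return gpus_per_node * GPU_WATTS * overhead_factor

def rack_power_kw(nodes_per_rack: int, gpus_per_node: int) -> float:
    """Rough rack power in kilowatts."""
    return nodes_per_rack * node_power_watts(gpus_per_node) / 1_000

# An 8-GPU node lands at ~8 kW before overhead, matching the
# ">8,000 watts per node" figure in the text.
print(node_power_watts(8))   # 8000.0

# A hypothetical rack of 16 such nodes already reaches 128 kW --
# far past the 10-20 kW racks legacy facilities were sized for,
# and a large step toward megawatt-class racks.
print(rack_power_kw(16, 8))  # 128.0
```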
Legacy facilities, sized for lower-density server racks, struggle to remove the heat and deliver stable power for GPU-dense configurations that can surge from idle to peak in milliseconds. Distribution losses remain stubborn: roughly 10% of total data center power is lost in delivery and conversion. The net effect: power delivery is becoming a critical limiting factor to scaling AI compute.
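To put that roughly-10% loss figure in perspective at the gigawatt scale mentioned earlier, a quick calculation (illustrative numbers only, taken from the ranges this article quotes):

```python
# Illustrative scale of distribution losses: ~10% of total power
# lost in delivery and conversion, applied to a gigawatt-class site.

FACILITY_WATTS = 1_000_000_000  # gigawatt-class: ~1 billion watts
LOSS_FRACTION = 0.10            # roughly 10% lost in delivery/conversion

lost_watts = FACILITY_WATTS * LOSS_FRACTION
print(lost_watts / 1e6)  # 100.0 -> ~100 megawatts dissipated before any compute
```

At that scale, even a few points of conversion efficiency translate into tens of megawatts of recovered capacity.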
Data center operators know the struggle. In short, today's power delivery methods are colliding with tomorrow's density, and that friction shows up as schedule risk, safety risk, and escalating OpEx. This isn't a problem of inefficiency; it's a problem of scalability: the rack-level power model built for CPU loads is collapsing under GPU surges.
This surge is stressing infrastructure on several fronts.
Meeting AI demand isn’t about adding thicker cables or more copper layers in a printed circuit board—it requires a fundamentally new power delivery model. We need solutions that are lighter, denser, more efficient, and built for automation, so operators can deploy faster, pack more compute per unit footprint, and cut distribution losses.
CelLink’s solutions advance the Open Compute Project Foundation’s core tenets of Efficiency, Impact, Scalability, Openness, and Sustainability.
CelLink already has a track record of power delivery innovation in the automotive industry, with more than three million flex harnesses deployed in electric vehicles on the road, along with a presence in industrial, aerospace, and power storage applications. We're now turning our attention to the largest power delivery challenge in the world today, bringing flex harness technology that redefines what's possible in data center power delivery. Our solutions aim to address the core present-day challenges of power delivery while also unlocking robotic assembly of systems, removing manual wire terminations and the errors and delays that come with them.
Across the industry, designs are moving toward 1 megawatt IT racks supported by liquid cooling. CelLink’s flex harnesses, already capable of carrying 1 megawatt per harness in the automotive industry, align with that trajectory by freeing rear-of-rack space for cooling hardware, simplifying tight cable routing in high-density racks, and enabling automation-first builds that keep pace with liquid-cooled deployments. Our POC demonstration at the OCP Summit will showcase the integration of liquid cooling directly into power delivery.
Potential benefits for AI factory owners include faster deployments, more compute per unit of footprint, and lower distribution losses.
With data center energy consumption projected to double by 2030, solving power delivery is not optional. CelLink's innovation represents more than a clever engineering tweak; it signals a revolution in power delivery. By replacing bulky round-wire cabling with a flat alternative, server manufacturers gain a clear path to building AI factories sustainably and at speed.
The industry has reimagined compute, networking, and cooling. Now it’s time to reimagine power delivery. Check out this tech brief for more details on CelLink’s solutions and connect with CelLink in the Innovation Village at the OCP Summit in San Jose, October 13-16, to learn more.