CelLink Flex Harnesses Enable the AI Power Revolution
To keep up with AI’s relentless pace, companies are innovating on every aspect of technology inside the data center, from silicon, storage, network, and compute systems to the power and cooling technologies that support data center facilities. This innovation extends to system power delivery, where major efforts are being made to re-architect foundational power delivery technologies from the ground up.
The Problem: Power Constraints in the AI Era
Hyperscale operators are now planning gigawatt-class data centers designed to consume more than 1 billion watts of power. At the rack level, individual GPUs frequently draw more than 1,000 watts, roughly three times the average CPU, and multi-GPU servers can easily top 8,000 watts per node. As a result, racks that once operated comfortably at 10–20 kilowatts are being pushed toward the megawatt class.
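To see how quickly the arithmetic escalates, here is a back-of-envelope sketch in Python. All figures are illustrative assumptions, not measurements from any specific deployment:

```python
# Back-of-envelope rack power estimate (illustrative numbers only).
GPU_WATTS = 1_000          # modern accelerators frequently exceed 1 kW each
GPUS_PER_NODE = 8          # a common multi-GPU server configuration
NODE_OVERHEAD_W = 1_000    # assumed CPUs, memory, NICs, fans per node
NODES_PER_RACK = 16        # assumed dense liquid-cooled rack

node_w = GPUS_PER_NODE * GPU_WATTS + NODE_OVERHEAD_W  # ~9 kW per node
rack_w = NODES_PER_RACK * node_w                      # ~144 kW per rack

print(f"Per node: {node_w/1e3:.0f} kW, per rack: {rack_w/1e3:.0f} kW")
# A 10-20 kW rack budget designed for CPU servers is exceeded roughly
# tenfold here, and denser configurations push toward the megawatt class.
```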
Legacy facilities, sized for lower-density server racks, struggle to manage the heat and deliver stable power for GPU-dense configurations that can surge from idle to peak in milliseconds. Distribution losses remain stubborn: roughly 10% of total data center power is lost in delivery and conversion. The net effect: power delivery is becoming a critical limiting factor in scaling AI compute.
The AI Factory Owner’s Pain Points
Data center operators know the struggle:
- Cable trays hit limits fast. Bulky copper cables consume space and block airflow.
- Rear-of-rack congestion slows liquid-cooling retrofits. Bend radius and routing constraints make every change a headache.
- Commissioning drags. Crews battle heavy conductors, torque bulky terminations, and chase faults in overcrowded cable runs.
- Losses remain stubborn. Roughly 10% of data center power vanishes in distribution and conversion.
- Safety and sustainability risks escalate. Every kilogram of copper is embodied carbon today and represents potential e-waste tomorrow.
In short, today’s power delivery methods are colliding with tomorrow’s density, and that friction shows up as schedule risk, safety risk, and escalating OpEx. This isn’t just a problem of inefficiency; it’s a problem of scalability. The rack-level power model built for CPU loads is collapsing under GPU surges.
This surge is stressing infrastructure in several ways:
- Distribution loss: Every extra power conversion stage bleeds watts, and long copper runs add resistive loss, forcing overprovisioning and increasing cooling requirements (a rough estimate follows this list).
- Physical bulk: Heavy gauge round wire conductors are hard to route, consume white space, constrict airflow, and cap rack density.
- Installation risk: Bulky terminations increase touchpoints and invite human error in assembly.
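As a rough illustration of the distribution-loss point above, the following sketch estimates resistive loss on a copper run using the standard I²R relation. The run length, cross-section, current, and bus voltage are all assumed values, chosen only to show the shape of the trade-off:

```python
# Rough I^2*R loss for a copper power run (illustrative, not a design).
RHO_CU = 1.68e-8        # copper resistivity, ohm*m (at ~20 C)
LENGTH_M = 5.0          # one-way run length, assumed
AREA_M2 = 120e-6        # conductor cross-section, ~120 mm^2, assumed
CURRENT_A = 500.0       # current per conductor, assumed
BUS_VOLTAGE = 50.0      # assumed DC bus voltage

r_one_way = RHO_CU * LENGTH_M / AREA_M2   # ~0.7 milliohm
loss_w = CURRENT_A**2 * r_one_way * 2     # supply + return paths
delivered_w = BUS_VOLTAGE * CURRENT_A     # 25 kW per conductor pair

print(f"Loss: {loss_w:.0f} W ({100*loss_w/delivered_w:.1f}% of delivered power)")
# Halving the run length or doubling the cross-section halves the loss,
# which is why routing and conductor geometry matter, and why these
# losses compound across thousands of racks.
```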
Rethinking Power Delivery in Alignment with OCP’s Core Tenets
Meeting AI demand isn’t about adding thicker cables or more copper layers in a printed circuit board—it requires a fundamentally new power delivery model. We need solutions that are lighter, denser, more efficient, and built for automation, so operators can deploy faster, pack more compute per unit footprint, and cut distribution losses.
CelLink’s solutions advance the Open Compute Project Foundation’s core tenets of Efficiency, Impact, Scalability, Openness, and Sustainability.
CelLink’s Breakthrough Solution
CelLink already has a track record of driving power delivery innovation in the automotive industry, with over three million flex harnesses deployed in electric vehicles on the road, plus a presence in industrial, aerospace, and power storage applications. We’re now turning our attention to the largest power delivery challenge in the world today, bringing flex harness technology that redefines what’s possible in data center power delivery. Our solutions aim to address the core power delivery challenges of the present day while also unlocking robotic assembly for systems, removing manual wire terminations and the errors and delays that come with them.
Across the industry, designs are moving toward 1-megawatt IT racks supported by liquid cooling. CelLink’s flex harnesses, already capable of carrying 1 megawatt per harness in automotive applications, align with that trajectory by freeing rear-of-rack space for cooling hardware, simplifying tight cable routing in high-density racks, and enabling automation-first builds that keep pace with liquid-cooled deployments. Our proof-of-concept (POC) demonstration at the OCP Summit will showcase the integration of liquid cooling directly into power delivery.
Potential benefits for AI factory owners include:
- Greater compute density: Thinner, patterned-to-shape cabling means more room for compute hardware per rack (the geometry sketch after this list shows why).
- Lower material use: Less conductor mass translates to a reduced environmental footprint.
- Automation compatibility: Robotic assembly and a software-defined, reproducible conductor footprint streamline buildouts, eliminate manual error, and improve field reliability.
- Faster construction: Lighter, patterned-to-shape harnesses and robotic terminations compress install, QA, and rework cycles.
- Efficiency gains: Backside liquid cooling of vertical power delivery VRMs keeps them operating at high efficiency, reducing waste heat by tens to hundreds of watts per server.
- Future-proofed infrastructure: Positions operators to handle the surge toward gigawatt campuses and megawatt-class racks without being choked by legacy cabling constraints.
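Much of the density and material story above comes down to conductor geometry. The sketch below compares a round wire and a flat conductor of equal copper cross-section (and therefore equal DC resistance); the dimensions are illustrative assumptions, not CelLink specifications:

```python
import math

# Geometry comparison: round wire vs flat conductor at equal cross-section
# (equal DC resistance). Figures are illustrative, not CelLink specs.
AREA_MM2 = 100.0              # conductor cross-section, assumed
FLAT_THICKNESS_MM = 2.0       # assumed flex-conductor thickness

d = 2 * math.sqrt(AREA_MM2 / math.pi)         # round wire diameter, ~11.3 mm
round_perimeter = math.pi * d                 # cooling surface per unit length

w = AREA_MM2 / FLAT_THICKNESS_MM              # flat conductor width, 50 mm
flat_perimeter = 2 * (w + FLAT_THICKNESS_MM)  # ~104 mm

print(f"Profile height: {FLAT_THICKNESS_MM:.0f} mm flat vs {d:.1f} mm round")
print(f"Cooling perimeter: {flat_perimeter:.0f} mm vs {round_perimeter:.0f} mm "
      f"({flat_perimeter/round_perimeter:.1f}x)")
# Same copper, same resistance: the flat form sheds heat over ~3x the
# surface area and stacks in a fraction of the height, which is where the
# density and routing benefits in the list above come from.
```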
What’s Next
With data center energy consumption projected to double by 2030, solving power delivery is not optional. CelLink’s innovation represents more than a clever engineering tweak; it signals a revolution in power delivery. By replacing bulky round wire cabling with a flat alternative, server manufacturers gain a clear path to building AI factories sustainably and at speed.
The industry has reimagined compute, networking, and cooling. Now it’s time to reimagine power delivery. Check out this tech brief for more details on CelLink’s solutions and connect with CelLink in the Innovation Village at the OCP Summit in San Jose, October 13-16, to learn more.