
The Network Revolution: Cornelis Tackles AI’s Efficiency Crisis

October 28, 2025

As artificial intelligence (AI) systems grow increasingly complex and demanding, a critical bottleneck has emerged that threatens to limit the transformative potential of enterprise AI: network efficiency. While organizations pour billions into graphics processing units (GPUs) for compute power, a surprising percentage of that computational capacity sits idle, waiting for data to move through inefficient network architectures.

I recently spoke with Lisa Spelman, CEO of Cornelis Networks, about this challenge. During our conversation, she revealed how Cornelis is addressing what she calls “the efficiency problem plaguing AI and HPC mega systems” through network design that promises to unlock significantly more value from existing infrastructure investments.

The Hidden Cost of Network Inefficiency

The scale of the efficiency problem becomes clear when examining GPU utilization patterns. Research reveals that GPUs spend 15% to 30% of their time in non-math mode, purely handling communications rather than performing the calculations that drive AI breakthroughs. This represents billions of dollars in computational capacity that organizations have purchased but are not fully using.
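To put that 15% to 30% range in perspective, here is a back-of-envelope sketch. The fleet size and hourly cost below are hypothetical values chosen only for illustration; they are not figures from the article or from Cornelis:

```python
# Hypothetical illustration: annual cost of GPU time lost to communication
# stalls. All inputs are assumed values for the sake of the arithmetic.
num_gpus = 10_000            # assumed size of a training cluster
cost_per_gpu_hour = 2.50     # assumed amortized $/GPU-hour
hours_per_year = 24 * 365

for comm_fraction in (0.15, 0.30):  # the 15%-30% range cited above
    idle_cost = num_gpus * cost_per_gpu_hour * hours_per_year * comm_fraction
    print(f"{comm_fraction:.0%} comm time -> ${idle_cost:,.0f}/year stranded")
```

Even at the low end of the range, the stranded capacity on a cluster of this assumed size runs into the tens of millions of dollars per year, which is the dynamic the article describes.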

“We are throwing more compute at the problem, putting more scale around the problem, putting more concrete, more power, all these things around the problem and saying we just have to brute force through these models,” Lisa explained. “We’ve got to move to elegance.”

That elegance comes through innovation that addresses bottlenecks at the system level. Multiple bottlenecks exist, but for the network, Cornelis has developed an end-to-end backend network architecture with unique features that improve GPU utilization and compute efficiency while maximizing the value of existing power budgets and infrastructure investments.

The Enterprise On-Premises Opportunity

While hyperscale cloud providers continue to drive frontier model development, Lisa identifies a significant opportunity in enterprise on-premises AI infrastructure. She notes that cloud providers currently capture 40% to 50% of the AI infrastructure market, which leaves substantial opportunity for enterprises, neoclouds, and sovereign cloud implementations that prioritize economics, privacy, security, and specific use case optimization.

This distributed approach to AI infrastructure creates new requirements for network efficiency, since enterprise implementations must maximize utilization within relatively constrained environments. The network becomes even more critical in these scenarios. Inefficiencies that might be tolerated by hyperscalers become prohibitive bottlenecks in enterprise deployments.

Real-World Impact Through System-Level Innovation

The practical benefits of network optimization extend far beyond theoretical performance improvements. Cornelis Networks’ solutions, including the recently launched CN5000, a 400-gigabit end-to-end network platform, deliver measurably better results.

These improvements manifest in multiple dimensions: better GPU utilization translates to faster model training and inference, and reduced power consumption per workload enables more intensive processing within existing power budgets. Improved overall system efficiency allows organizations to tackle larger problems with the same hardware investments, delivering system-level benefits that improve total cost of ownership and accelerate time to value for enterprise AI initiatives.
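The relationship between communication overhead and effective compute can be sketched with a simple model. The peak-throughput figure and overhead fractions below are assumptions for illustration, not Cornelis benchmarks:

```python
# Rough model: effective throughput = peak * (1 - communication fraction).
# The peak and fractions are assumed values chosen only to show the effect.
peak_tflops = 1000.0         # hypothetical per-GPU peak throughput

for comm_fraction in (0.30, 0.15):
    effective = peak_tflops * (1 - comm_fraction)
    print(f"comm {comm_fraction:.0%} -> {effective:.0f} effective TFLOPS")
```

Under these assumed numbers, halving communication overhead from 30% to 15% lifts effective throughput from 700 to 850 TFLOPS, roughly a 21% gain on the same hardware and power budget, which is why network efficiency compounds directly into training and inference speed.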

Looking Ahead: Sustainable AI Infrastructure

Recent studies suggest that over 90% of enterprise AI efforts struggle to achieve meaningful return on investment. However, Lisa believes the industry stands at an inflection point where that dynamic is about to reverse completely. And as organizations move from experimental AI projects to production deployments that must deliver measurable business value, efficiency optimization becomes crucial for long-term success.

Lisa’s confidence in this transformation stems from her experience across multiple technology waves, including her time managing IT infrastructure during the early cloud computing era. The pattern suggests that enterprises that embrace efficiency-focused AI infrastructure today will establish competitive advantages that become increasingly difficult for competitors to match.

The TechArena Take

Cornelis Networks’ approach addresses a critical gap in current AI infrastructure discussions. While much attention focuses on computational power and model sophistication, network efficiency represents an often-overlooked opportunity to unlock significant additional value from existing investments.

Lisa’s emphasis on moving from “brute force to elegance” reflects a maturing industry that recognizes sustainable AI deployment requires optimization across the entire infrastructure stack. Organizations that prioritize network efficiency alongside compute power will be better positioned to achieve the ROI that has proven elusive for many enterprise AI initiatives.

The convergence of AI-native enterprise cultures with efficiency-optimized infrastructure creates conditions for the kind of transformative business impact that will differentiate winners in the next phase of AI adoption.

For more insights on Cornelis Networks’ approach to AI infrastructure optimization, visit cornelisnetworks.com or connect with Cornelis Networks on LinkedIn.

Watch the Interview | Subscribe to Our Newsletter
