
Seeing the Light: Overcoming the 3 Hard Walls of Agentic AI
It’s fair to ask whether AI in 2026 is a bubble. The echoes of the early 2000s are real: valuations running ahead of revenues, plenty of compelling tech, and plenty of fuzzy business models. We’ve seen this movie before.
But here’s what feels different this time. We’ve now seen AI deliver real, tangible value, from agentic systems like self-driving cars to generative models like ChatGPT, Gemini, and Claude. New workflows are already reshaping engineering and productivity. The value is real, even if the business models are still forming. What’s no longer speculative is what AI demands in practice: massive compute, running continuously, coordinated across thousands, and soon millions, of processing elements.
And where compute goes, networking must follow.
Training isn’t just about FLOPS; it’s about keeping GPUs fed and synchronized—moving data between accelerators, memory tiers, and storage with tight timing. Inference at scale isn’t “lightweight” either. Agentic systems add constant coordination, state exchange, and feedback loops. This is persistent, symmetric traffic, less like consumer internet burstiness, more like an industrial control system that hates latency and variance.
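To make “keeping GPUs fed” concrete, here is a minimal back-of-envelope sketch. The model size, GPU count, and per-GPU bandwidth below are illustrative assumptions, not measurements of any real cluster:

```python
# Back-of-envelope: time for one gradient all-reduce across a training job.
# All numbers are illustrative assumptions, not vendor specifications.

def ring_allreduce_seconds(params_billion: float, bytes_per_param: int,
                           n_gpus: int, link_gb_per_s: float) -> float:
    """A classic ring all-reduce moves ~2*(n-1)/n of the payload per GPU."""
    payload_bytes = params_billion * 1e9 * bytes_per_param
    traffic_per_gpu = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return traffic_per_gpu / (link_gb_per_s * 1e9)

# Hypothetical 70B-parameter model, fp16 gradients, 1024 GPUs,
# 50 GB/s of effective per-GPU network bandwidth:
print(f"{ring_allreduce_seconds(70, 2, 1024, 50):.1f} s per sync step")  # ~5.6
```

Real frameworks overlap communication with computation, but the sketch shows the shape of the problem: if the network cannot move hundreds of gigabytes per step quickly and predictably, the GPUs simply wait.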
So, while the top of the stack is still sorting itself out, the bottom of the stack is converging. Those infrastructure requirements are driving real decisions: AI-first data centers, power secured years out, liquid cooling systems designed in from day one, and campuses planned as a single distributed computer.
In the dot-com era, Alan Greenspan famously cautioned against “irrational exuberance.” What’s unfolding now feels more deliberate and methodical, albeit no less exuberant. It manifests not in pitch decks, but in data centers, power contracts, and miles of fiber.
From AI Debate to Infrastructure Bottleneck
Early in any technology cycle, progress is driven by ideas. Better algorithms. Smarter software. More elegant abstractions. Over time, however, the limiting factor shifts from what we can imagine to what we can physically deploy.
That shift is now unmistakable in AI.
Regardless of which hyperscaler wins, which model architecture dominates, or which application becomes the killer use case, the requirements inside the data center are converging quickly. AI systems must be dramatically faster, far denser, and far more tightly coupled than anything the industry has operated before—not just larger clusters, but clusters that behave as a single, synchronized system.
For years, optics and networking evolved as predictable plumbing. Bandwidth increased incrementally. Power budgets were manageable. Traffic patterns were relatively well behaved. That trajectory worked for cloud computing and the consumer internet.
AI introduces a discontinuity.
When that linear roadmap is mapped against the demands of large-scale training, generative inference, and agentic workloads, the gap becomes obvious. East–west traffic explodes. Latency consistency matters as much as raw throughput. GPUs grow intolerant of waiting. At scale, the cost and the energy of moving data begin to rival those of computing on it.
This is how industries respond to step changes: they build the substrate first.
Hyperscalers and vendors are investing ahead of certainty—not betting on a single application or winner, but on the belief that AI will require fundamentally different physical systems. In doing so, they are running into a new reality: scaling AI is no longer gated by software ambition alone. It is increasingly constrained by three intertwined limits—speed, thermals, and power delivery.
Those constraints now define the AI infrastructure roadmap.
The Three Walls—and Why Optics Sits at the Center of All of Them
As AI systems scale, the industry is no longer debating abstract limits. It is colliding with three very concrete ones. They arrive together, reinforce each other, and cannot be solved independently.
These are the three walls now shaping AI infrastructure: speed, thermal envelope, and power delivery.
Wall #1: The Speed Wall (and the Copper Limit)
AI workloads demand orders of magnitude more data movement than previous generations of compute. Training large models requires constant synchronization across thousands of accelerators, while emerging agentic systems add persistent coordination and state exchange across distributed components.
To meet that demand, signaling speeds have been pushed relentlessly higher — and this is where physics intrudes.
At the frequencies required for modern AI interconnects, copper becomes a fundamental constraint. Signal integrity degrades rapidly with distance. Loss rises. Reach collapses dramatically from meters to centimeters. At scale, this creates a hard architectural ceiling.
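A rough first-order model makes the collapse intuitive (the constants here are illustrative, not measured channel data). Per-unit-length copper loss grows with frequency, while the loss budget a receiver can tolerate stays roughly fixed:

$$
\alpha(f) \approx a\sqrt{f} + b f \;\;\text{[dB/m]}, \qquad L_{\max}(f) \approx \frac{A_{\text{budget}}}{\alpha(f)}
$$

The $\sqrt{f}$ term captures skin-effect (conductor) loss and the linear term dielectric loss. Each doubling of the signaling frequency therefore shrinks $L_{\max}$ by a factor between $\sqrt{2}$ and $2$, which is how multi-meter reach at older rates becomes tens of centimeters today. Fiber, by contrast, attenuates at roughly 0.2–0.4 dB/km almost independently of data rate, while propagating signals at about 5 ns per meter.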
This is not simply a matter of faster PHYs. As AI clusters expand beyond a single rack or building into “scale-across” systems, bandwidth and latency become inseparable. Propagation delay matters as much as throughput, and copper simply cannot preserve both over distance.
Optics relaxes this constraint by delivering far higher bandwidth while maintaining reach and latency as systems scale across racks, buildings, and campuses.
Wall #2: The Thermal Wall (Why “Just Go Faster” Fails)
Even where copper can deliver sufficient speed, it increasingly fails on heat.
As electrical signaling rates rise, resistive losses convert a growing share of energy directly into heat. In high-density AI racks, this creates a feedback loop: higher speed drives more heat, which demands more cooling, which consumes more power and constrains further scaling.
This is why liquid cooling has moved from an optimization to a requirement in modern AI infrastructure. At rack densities well beyond 100 kW, thermals increasingly shift from an operational concern to an architectural one.
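Simple sensible-heat arithmetic shows why air gives out at these densities. A sketch in Python, where the rack load and allowable temperature rise are assumptions chosen for illustration:

```python
# Why a ~100 kW rack outruns air cooling: volumetric flow Q = P / (rho * cp * dT).
# Rack power and temperature rise are illustrative assumptions, not a design.

rack_power_w = 100_000   # assumed rack load in watts
rho_air = 1.2            # kg/m^3, air density near sea level
cp_air = 1005.0          # J/(kg*K), specific heat of air
delta_t = 15.0           # K, assumed inlet-to-outlet temperature rise

flow_m3s = rack_power_w / (rho_air * cp_air * delta_t)
print(f"{flow_m3s:.1f} m^3/s (~{flow_m3s * 2118.88:,.0f} CFM) of air per rack")
# ~5.5 m^3/s, roughly 11,700 CFM -- far beyond what a rack can realistically
# move, which is why liquid (with ~4x the specific heat and ~800x the density
# of air) becomes the only workable medium.
```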
Optics changes this equation by reducing resistive loss at the source. Moving data as light — and shortening or eliminating electrical paths through approaches like co-packaged optics — lowers heat generation and expands the thermal envelope available for compute.
At AI scale, optics isn’t about going faster. It’s about not melting the system while doing so.
Wall #3: The Power Delivery Wall (The Grid Becomes the Limit)
The final wall is the most unforgiving: power delivery.
In practice, many data centers are now constrained less by space or fiber availability than by access to electricity itself. New facilities are increasingly sited where power is available, near hydroelectric, nuclear, or renewable sources rather than where latency is most convenient.
In the cloud era, we measured success in gigabits per second. In the agentic era, one of the defining metrics is increasingly joules per inference. We are moving from a performance-constrained world to an energy-constrained one. Power must be budgeted hierarchically: per server, per rack, per row, per facility. One of the largest and fastest-growing consumers of that power is data movement, particularly the repeated conversion between electrical and optical domains.
The math is sobering. At scale, the energy spent moving bits can rival the energy spent computing on them.
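Here is that math in miniature. The energy-per-bit and energy-per-FLOP figures are order-of-magnitude assumptions drawn from commonly cited ranges, not measurements of any specific part:

```python
# Order-of-magnitude: energy to move one fp16 value vs. energy to compute on it.
# All pJ figures are illustrative assumptions from commonly cited ranges.

link_pj_per_bit = {
    "long-reach electrical SerDes": 5.0,   # assumed
    "pluggable optical module":     15.0,  # assumed
    "co-packaged optics":           1.0,   # assumed target
}
flop_pj = 0.5  # assumed pJ per FLOP on a modern accelerator

bits = 16  # one fp16 activation
for link, pj in link_pj_per_bit.items():
    move_pj = bits * pj
    print(f"{link:>28}: {move_pj:6.1f} pJ per value "
          f"(~{move_pj / flop_pj:.0f} FLOPs' worth of energy)")
```

Under these assumptions, a single hop of data movement costs tens to hundreds of arithmetic operations' worth of energy; every electrical-to-optical conversion that can be shortened or eliminated returns that energy to compute.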
Optics is central here not just because it is efficient, but because it enables efficiency everywhere. By doing more in light and less in copper — and by pushing optical interfaces closer to compute — operators can reduce energy per bit, per port, and per rack, freeing scarce power for actual computation.
This is what allows power-constrained data centers to continue scaling, and what makes it feasible to couple multiple facilities into much larger virtual systems.
Why Optics Solves All Three Walls at Once
These three walls are tightly coupled. Solving one in isolation makes the others worse. Faster electrical signaling increases heat. More cooling increases power draw. Greater power demand stresses both facilities and the grid, capping further scale.
This coupling is what makes AI infrastructure different from previous compute cycles.
Optics is unique because it relaxes all three constraints simultaneously. It delivers the bandwidth and reach required for scale-across architectures, reduces thermal load by minimizing resistive loss, and lowers energy consumption per bit, freeing scarce power for computation rather than transport.
That combination is why optics has moved from predictable plumbing to a first-order architectural consideration. Across components, systems, and emerging approaches like optical switching and co-packaged optics, the industry is increasingly using light to break limits that electrons can no longer navigate efficiently.
This shift applies not only to new builds. Existing data centers are being retrofitted to accommodate AI workloads, driving additional optical demand as legacy, copper-heavy designs are reworked to survive higher speeds, tighter thermal envelopes, and stricter power budgets.
Optics doesn’t eliminate tradeoffs, but at AI scale, it expands the feasible design space in ways no other approach can.
Is AI a Bubble?
We still don’t know which applications will dominate, which business models will endure, or which hyperscalers will capture the most value. Those questions remain open.
But one thing is no longer in doubt.
Whatever form AI ultimately takes, it will require a fundamentally new physical substrate — one that is faster, more deterministic, and dramatically more power-efficient than what came before. That substrate is being built now, and it is being driven by optics.
This is not speculation. It is infrastructure.
And infrastructure, once committed to at this scale, has a way of shaping the future regardless of who wins the race at the top of the stack.
History offers a useful parallel. After World War II, the United States embarked on an enormous infrastructure project: the interstate highway system. It was built without knowing exactly where people would live, which cities would boom, or which industries would dominate. It was built on a conviction that mobility would matter, and that the country would be better off prepared for wherever it led.
The AI infrastructure build-out has the same shape.
Data centers, power delivery, cooling systems, and optical interconnects are being constructed not because the industry has perfect clarity on applications or economics, but because it has conviction that AI will be foundational. Once that conviction takes hold, infrastructure becomes destiny.
This is why this moment feels different from past bubbles. Software cycles can inflate and deflate. Markets can overshoot and correct. But when an industry runs into hard physical limits, the response is not debate. It is construction.
Many AI companies will fail. Some valuations will reset. Entire categories will consolidate or disappear. That is how every major cycle unfolds.
But the infrastructure being built now will not vanish with the noise. Like the highways of the last century, it will outlive the narratives that justified its construction and quietly shape everything that comes next.
After World War II, we paved the country with concrete and asphalt. Today, we are doing it again, this time with photons, lasers, and fiber.
We are building massive highways of light.
The applications will change. The winners will shift. The economics will evolve.
But the highways will remain.