
8 Ways AI Will Rewrite Data Center Infrastructure in 2026
Predicting the future of data centers is always a gamble, but one thing is clear: 2026 will be a year of reckoning. The industry is no longer just powering the digital world – it is becoming the backbone of modern society.
AI is at the center of this shift. AI factories became the new benchmark for hyperscalers in 2025. In 2026, their influence extends much further: power, cooling, space, and supply chains are all being reshaped around AI’s appetite for compute.
At the same time, the data center industry is rapidly maturing and starting to look and behave like a utility. Energy availability, grid stability, and long-term resource planning are now board-level topics.
This first part of my two-part 2026 predictions series looks at the physical side of that transformation: how AI will rewrite data center infrastructure in 2026.
1. AI Factories Go Gigawatt-Scale
In 2026, we will see the first wave of truly gigawatt-scale AI campuses moving from announcement to reality.
Hyperscalers are pouring billions into custom silicon, liquid-cooled mega-clusters, and, in some cases, dedicated power infrastructure. They are effectively building digital power plants: facilities where the fuel is energy and data, and the output is AI models and services—and residual heat.
These projects put tremendous strain on local grids. Large AI training jobs, with their spiking compute demand, already have a visible impact on grid stability. To keep building, operators and utilities will need to plan together: long-term contracts, shared investments in new generation, and smarter demand management. Not every data center will be an AI factory, but the ones that are will set the pattern for utility-scale digital infrastructure.
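One way to picture "smarter demand management" is a ramp-rate cap on how fast a facility's grid draw may change. The sketch below is a toy model; the load profile, step interval, and cap are illustrative assumptions, not measurements from any real site:

```python
# Toy sketch: smoothing a spiky AI training load with a ramp-rate cap.
# All numbers are illustrative, not measurements from any real facility.

def ramp_limited(load_mw, max_ramp_mw_per_step):
    """Limit how fast the facility's grid draw may change per time step.

    In practice, the gap between requested and delivered power would be
    bridged by on-site batteries or by throttling the training job.
    """
    smoothed = [load_mw[0]]
    for target in load_mw[1:]:
        prev = smoothed[-1]
        step = max(-max_ramp_mw_per_step,
                   min(max_ramp_mw_per_step, target - prev))
        smoothed.append(prev + step)
    return smoothed

# A training job that oscillates between compute bursts and checkpoint pauses.
spiky = [50, 320, 60, 310, 55, 330, 50]   # MW per time step (illustrative)
capped = ramp_limited(spiky, max_ramp_mw_per_step=40)

def max_swing(xs):
    return max(abs(b - a) for a, b in zip(xs, xs[1:]))

print(f"raw max swing:    {max_swing(spiky)} MW/step")
print(f"capped max swing: {max_swing(capped)} MW/step")
```

From the grid's perspective, the capped profile is what makes a gigawatt-scale campus plannable: the utility sees bounded ramps instead of near-instant multi-hundred-megawatt swings.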
2. 800 VDC and Advanced Liquid Cooling Become the New Baseline
Traditional power distribution and air cooling are hitting their limits.
Architectures like NVIDIA’s Kyber racks – with vertical compute blades, 800-volt direct current (VDC) distribution, and liquid cooling – point to where high-density AI infrastructure is heading. Higher voltage means lower losses, less copper, and more efficient use of space and power.
In 2026, 800 VDC and direct-to-chip or cold-plate liquid cooling will start to move from “bleeding edge” to “expected baseline” for dense AI racks. Operators that design new facilities around legacy assumptions risk locking themselves out of future deployments.
3. OCP Becomes the Default Playbook Beyond Hyperscalers
The momentum behind the Open Compute Project (OCP) is now, in my humble opinion, unstoppable.
What began as a hyperscaler-driven effort has become a mainstream movement. OCP’s open standards and reference designs are increasingly the only realistic way for next-wave cloud providers to approach AI-ready infrastructure without reinventing everything themselves.
NVIDIA’s MGX ecosystem and OCP’s work on busbars and liquid-cooled power shelves are turning OCP into the common language for building dense, efficient AI clusters. In 2026, OCP will shift from “interesting option” to “default starting point” for new AI capacity, especially for those without hyperscaler budgets.
4. AI-Readiness Becomes a Standard Design Requirement
Not every facility will become a full AI factory, but data centers will need to accommodate some level of AI compute capacity.
Hyperscalers will dominate training of the largest models. But inference – and smaller-scale training and fine-tuning – will be everywhere. Enterprises want to use their own data for vertical-specific use cases, without sending everything to a public cloud.
That means even “general purpose” sites will adapt: carving out high-density AI pods, upgrading network fabrics, and adjusting power and cooling envelopes. In 2026, being “AI-ready” stops being a marketing phrase and becomes a basic design requirement.
5. Edge AI Gives Forgotten Sites a Second Life
Edge computing is experiencing a renaissance.
Edge devices capable of running AI workloads are unlocking new autonomous capabilities in cities, factories, logistics, and retail. These use cases demand low latency and local data processing. Shipping everything back to a central AI factory simply does not work in every scenario.
In 2026, more operators will repurpose older or smaller facilities as edge AI nodes. Sites that previously hosted caches or basic web workloads will be upgraded to run inference clusters, small training jobs, and data aggregation pipelines. For many smaller players, winning at the edge will be more realistic than competing in hyperscale training.
6. Quantum Readiness Quietly Seeps into Data Center Planning
AI dominates headlines, but quantum is quietly entering the conversation.
The immediate impact in 2026 will be post-quantum cryptography rather than quantum compute capacity in every data center. As awareness of “harvest now, decrypt later” attacks grows, operators will look at quantum-resistant encryption schemes across networks and storage.
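A useful planning heuristic here is Mosca's inequality: if the required secrecy lifetime of the data plus the time needed to migrate to post-quantum cryptography exceeds the time until a cryptographically relevant quantum computer exists, ciphertext harvested today is already at risk. A minimal sketch, with illustrative planning inputs:

```python
# Mosca's inequality: data is at risk if
#   secrecy lifetime + migration time > years until a relevant quantum computer.
# The year values below are illustrative planning inputs, not forecasts.

def at_risk(secrecy_years: float, migration_years: float,
            years_to_quantum: float) -> bool:
    return secrecy_years + migration_years > years_to_quantum

# e.g. records that must stay confidential for 20 years, a 5-year PQC
# rollout, and an assumed ~15 years until a relevant quantum computer:
print(at_risk(20, 5, 15))   # True: the migration clock has already run out
```

The point of the exercise is that the decision to migrate depends on the data's lifetime, not on when the quantum computer actually arrives.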
Government roadmaps, such as the U.S. CNSA 2.0 milestones, are already shaping procurement. New network equipment and security systems will increasingly be expected to support post-quantum algorithms. A handful of commercial quantum-focused facilities will appear, widening the capability gap between the “AI and quantum haves” and everyone else – and forcing operators to think about how their own data centers will eventually integrate with a quantum ecosystem.
7. Supply Chain Pain Pushes a Circular Infrastructure Mindset
The ripple effect of AI adoption continues to hammer the supply chain. GPU, memory, and storage shortages, longer lead times, and rising prices are not going away in 2026.
Under that pressure, the industry will move toward more circular models. Reuse of infrastructure will become more common. Life cycles for servers, racks, and power gear will be extended. Retrofits will be preferred over greenfield builds when possible.
Instead of ripping and replacing entire halls, operators will look at modular upgrades: swapping accelerator trays while reusing power, cooling, and networking backbones. Older facilities and hardware will be repurposed as edge nodes or secondary inference sites. Scarcity and sustainability will finally be aligned, not in conflict.
8. Photonics and Extreme Frontiers Test the Limits
As AI systems scale out, copper is struggling to keep up. You can only push so much bandwidth over so much distance before losses become unacceptable.
In 2026, photonics moves from science project to serious pilot. We will see more experiments with optical interconnects inside and between racks, aiming to cut power and boost bandwidth.
At the same time, with land and energy constraints mounting, a handful of players will test extreme frontiers: underground bunkers, underwater modules, even orbital data center concepts. Google’s patents for orbital data centers and projects like Holland Datacenters’ Cyberbunker hint at facilities operating in space or underground. These solutions could shrink the footprint on Earth’s surface – or simply shift the problem to a new domain. Either way, they are exclusive, expensive, and energy-intensive to build and maintain. They are niche experiments, but they show how far the industry is willing to go to secure power, cooling, and space.
The common thread: once data centers start acting like utilities, they face the same hard questions. Where do you put them? How do they interact with communities and the environment? And what happens when they fail?
The Road to Data Centers-as-Utilities
Taken together, these shifts point toward a simple conclusion: in 2026, data centers will look less like anonymous buildings full of servers and more like complex, utility-grade plants engineered around AI.
AI is the forcing function, but the implications go far beyond adding GPUs. Power architectures, cooling designs, supply chains, and site strategies are all being rewritten.
In Part 2, we move from steel and silicon to power, policy, and public trust – and explore how regulation, sovereignty, and ethics will shape the next chapter of data centers as the new utilities.