How Equinix & Solidigm See Evolving Workloads Reshaping IT

October 20, 2025

Once defined by monolithic architectures and predictable workloads, today’s enterprise data center strategies are shaped by the explosive rise of AI, the realities of hybrid multicloud, and the mounting pressure of regulatory and efficiency demands. I recently spoke with Glenn Dekhayser, global principal technologist at Equinix, and Scott Shadley, leadership marketing director at Solidigm, who shared their perspectives on how enterprises are adapting, and what it will take to succeed in the years ahead.

The conversation began with an important insight on data center infrastructure from Glenn, who noted that AI has “10x’d” hybrid multicloud architectures. As he explained, organizations are grappling with where to deploy AI workloads—cloud, GPU-as-a-service, on-premises, or edge. As those workloads move to production, they’re driving a fundamental shift toward dense power solutions and liquid cooling as enterprises seek to control costs and performance.  

But the real transformation is in how organizations think about data itself. A “data-centric” strategy, while complex in execution, comes down to a simple idea. “Whatever you’re doing, creating value starts with your data,” said Glenn. For enterprises trying to extract new value streams from their data, that means workloads now come to the data, rather than the reverse. Enterprises are building entire data marts to reflect this shift: data no longer has a one-to-one relationship with applications; instead, multiple applications access shared datasets.
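To make that pattern concrete, here is a minimal, purely hypothetical sketch of a shared-data approach: a dataset registered once in a catalog, with several applications resolving the same authoritative location instead of each keeping its own copy. The DataCatalog class, dataset name, and application names are illustrative assumptions, not anything discussed in the interview.

# Hypothetical illustration of a data-centric pattern: one shared,
# governed dataset that multiple workloads reference, instead of
# each application keeping its own private copy.

class DataCatalog:
    """Registry mapping dataset names to a single authoritative location."""

    def __init__(self):
        self._datasets = {}

    def register(self, name: str, location: str, owner: str):
        # One entry per dataset: the authoritative copy.
        self._datasets[name] = {"location": location, "owner": owner}

    def resolve(self, name: str) -> str:
        # Every workload resolves the same location; the data does not move.
        return self._datasets[name]["location"]


catalog = DataCatalog()
catalog.register("customer_360", "s3://core-dc/customer_360/", owner="data-platform")

# Three different applications consume the same shared dataset.
for app in ("fraud-scoring", "recommendations", "regulatory-reporting"):
    print(f"{app} reads from {catalog.resolve('customer_360')}")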

This centralized data approach addresses the reality that while workloads are relatively easy to deploy and orchestrate, datasets carry constraints: they’re slow to move, require governance, and face compliance and sovereignty requirements. In response to these challenges, Glenn said he counsels customers to create an “authoritative core”: a single copy of each active dataset on equipment you control, in locations you can access.
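As a rough illustration of the authoritative-core idea, the sketch below (with made-up names and locations) keeps a single source-of-truth copy on controlled equipment and records read-only projections pushed out to a cloud region or edge site, each derived from the core rather than the other way around.

from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    core_location: str                                 # the authoritative copy, on equipment you control
    projections: dict = field(default_factory=dict)    # read-only copies elsewhere

    def project(self, target: str, reason: str):
        # Projections are derived from the core, never the other way around.
        self.projections[target] = {"source": self.core_location, "reason": reason}


orders = Dataset("orders", core_location="colo-nyc5://vault/orders")
orders.project("aws-us-east-1", reason="analytics latency")
orders.project("edge-chicago", reason="real-time inference")

print(orders.projections)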

This core, of course, must be balanced with the ability to project data where it needs to be for optimal governance, compliance, cost, and performance. That could be in public clouds, at the edge, or in the core data center. Many enterprises, however, “are starting to realize they’re not necessarily set up to take advantage of all those places,” Glenn said. “And if their data architecture...[isn’t] ready to accommodate this mobility, they’re going to find themselves at a competitive disadvantage.”  

As organizations work to upgrade their data architecture to avoid such a disadvantage, data center infrastructure technology is keeping pace. Scott explained how storage technology in particular is evolving to meet the needs of new data center strategies. Large-capacity drives help enterprises keep data close and store more while using less power, while performance drives keep work on active data fast so that storage doesn’t become a bottleneck. As he summarized, “A modern architecture of flash tiering, flash plus hard drives, is becoming even more and more valuable.”  
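One way to picture that tiering is a simple placement policy that routes data to performance flash, high-capacity flash, or hard drives based on how recently it was touched. The thresholds and tier names below are assumptions for the sketch, not guidance from Solidigm.

# Illustrative (not prescriptive) tiering policy: hot data on performance
# flash, warm data on high-capacity flash, cold data on hard drives.

PERFORMANCE_FLASH = "performance_flash"
CAPACITY_FLASH = "capacity_flash"
HDD = "hdd"

def choose_tier(days_since_last_access: int) -> str:
    if days_since_last_access <= 7:        # hot: actively worked on
        return PERFORMANCE_FLASH
    if days_since_last_access <= 90:       # warm: still queried regularly
        return CAPACITY_FLASH
    return HDD                             # cold: retained, rarely read

for dataset, idle_days in [("feature_store", 2), ("q2_logs", 45), ("2019_archive", 700)]:
    print(f"{dataset}: {choose_tier(idle_days)}")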

When asked about the one question enterprise data center managers should consider that they weren’t thinking about five years ago, both experts converged on a theme: architectural flexibility for unknown future requirements. Scott emphasized looking beyond capital expenses to operational expenses, building systems with five-year operational efficiency in mind. Glenn stressed the unknown, saying he often asks organizations how they are “architected for change” in a world where the next transformational service provider could emerge from completely unexpected origins.  
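To illustrate the capex-versus-opex point, the back-of-the-envelope comparison below uses entirely assumed prices, wattages, electricity rates, and PUE; the point is simply that the system with the lower sticker price is not automatically the cheaper one over a five-year horizon once power and cooling are counted.

# Hypothetical five-year cost comparison: all prices, wattages, and
# rates below are made-up illustrations, not vendor figures.

HOURS_PER_YEAR = 24 * 365
ELECTRICITY_RATE = 0.12   # USD per kWh, assumed
PUE = 1.5                 # assumed power usage effectiveness (cooling overhead)
YEARS = 5

def five_year_cost(capex_usd: float, avg_watts: float) -> float:
    energy_kwh = avg_watts / 1000 * HOURS_PER_YEAR * YEARS * PUE
    return capex_usd + energy_kwh * ELECTRICITY_RATE

option_a = five_year_cost(capex_usd=80_000, avg_watts=4_000)   # cheaper upfront, power-hungry
option_b = five_year_cost(capex_usd=95_000, avg_watts=1_500)   # pricier upfront, efficient

print(f"Option A: ${option_a:,.0f} | Option B: ${option_b:,.0f}")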

The conversation culminated in an analogy: if the best time to plant a tree was 50 years ago, the second-best time is now. For enterprises sitting on large datasets in public cloud environments, the cost and complexity of data mobility only increase with time. For organizations that aren’t yet “architected for change,” the time to remedy that is now, not in one to two years when their data lake will be even larger, and therefore more difficult and more expensive to move.  
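A quick, hypothetical calculation shows why waiting gets expensive: with an assumed per-gigabyte egress rate (actual cloud pricing varies), the bill to repatriate a data lake scales directly with its size, so a lake that keeps growing in place costs more to move each year.

# Assumed egress price for illustration only; real cloud pricing varies.
EGRESS_USD_PER_GB = 0.05

def egress_cost(terabytes: float) -> float:
    return terabytes * 1024 * EGRESS_USD_PER_GB

today = egress_cost(500)          # move a 500 TB data lake now
in_two_years = egress_cost(1200)  # the same lake after two more years of growth

print(f"Move now: ${today:,.0f}   Move later: ${in_two_years:,.0f}")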

The TechArena Take  

The enterprise data center conversation has evolved from optimizing known workloads to architecting for unknowable futures. Equinix and Solidigm’s insights reveal that success increasingly depends on maintaining data sovereignty while preserving access to innovation. Organizations that establish authoritative data cores with agile connectivity to diverse service ecosystems today will be positioned to capitalize on tomorrow’s transformational opportunities, whatever form they may take.
