
Will OCP Infiltrate NeoCloud?
The Open Compute Project Foundation (OCP) has an undeniable impact on cloud deployments, and with $191B in forecasted infrastructure sales by 2029, there is no slowing this segment of the market. But many ask whether OCP can transcend its current focus on hyperscalers to engage with neo-cloud providers. Who are the neo-clouds? If you’re not familiar with the term, they are the large-scale players building infrastructure custom-designed for AI workloads. Think CoreWeave, Lambda…or regional players like ScaleUp. And instead of building infrastructure to fail, as has been the case with the built-in redundancy of the cloud, they are building infrastructure to scale, delivering the horsepower required for every aspect of AI.
OCP has been very CPU-focused in its specs and marketplace, but with NVIDIA bringing some initial designs into the OCP ecosystem, the door has opened to neo-cloud influence. At the Dublin event this week, Solidigm, Fractile, FarmGPU, and ScaleUp discussed what it will take to make this segment of the market as robust as traditional hyperscale.
In the discussion, Fractile CEO Walter Goodwin pointed out that these players tend to push configurations even more aggressively, given the inherent challenges of accelerated computing. That pace can outstrip the speed of innovation of traditional standards-based hardware. How can OCP deliver a standard without limiting designs to a perceived one-size-fits-all philosophy?
Discussion moderator Nilesh Shah from Zero Point Technologies pointed to an underlying shift in infrastructure toward a more storage-centric approach, citing OpenAI’s new storage-centric blueprint for accelerated computing as an example. Finding a similar focus and pulling the storage industry closer to the center of OCP, he argued, would open up new configuration alternatives. Beyond storage centricity, Nilesh pointed to a broader diversity of silicon design foundations for innovation, giving operators different options for AI acceleration. Walter agreed, stating that his company’s planned 2027 product introduction, aimed at competing with NVIDIA GPUs, is designed for the rack and facility level – exactly where neo-cloud vendors are seeking accelerator alternatives.
JM Hands, CEO of FarmGPU, suggested that another opportunity for neo-cloud traction lies in adopting a mix-and-match approach to configurations, delivering a higher level of customization and a disaggregation of infrastructure to dial in exactly what providers need. Many of the challenges neo-cloud providers are grappling with, he argued, have already been solved by the hyperscalers within OCP. Creating more flexible ways to tap this technology through the OCP marketplace would help spur interest and engagement from these service providers.
What’s the TechArena take? Neo-cloud is still arguably in its infancy. Yes, that sounds ridiculous given the valuation of some of these companies, but with AI moving so fast and furious, it’s easy to forget that some of these companies were not delivering services just a few years ago. Today, many neo-cloud providers function as warehouses of NVIDIA configurations for hyperscaler consumption. Will they remain outsourced compute for hyperscaler balance sheet management, or will their businesses take them further afield as more enterprises start adopting generative and agentic AI at scale? The discussion certainly underscored a challenge in data centers today: NVIDIA designs are taking some of the air out of the room for flexible innovation, and neo-cloud’s current reliance on NVIDIA GPUs aligns with a narrower approach to deployments, at least in the near term. We applaud the call for more memory- and storage-centric thinking, given the growing complexity of feeding AI and avoiding what we affectionately call agentic dementia, something we’ll cover in depth in an upcoming post. We see an acute opportunity for OCP, indeed an imperative if it is to capture the AI curve fully and build a vibrant ecosystem. We also see the broader industry – from chiplet designers to integrated rack manufacturers and power and cooling providers – benefiting from a widened view of the market opportunity driven by OCP-centric innovation. We’ll be watching attentively come October for signs of significant traction in this space.