
WEKA and Nebius Unite to Supercharge AI Workloads with GPUaaS Powerhouse
The infrastructure to support AI workloads is evolving as rapidly as AI workloads are growing.
In a strategic partnership announced today, WEKA and Nebius are meeting that challenge head-on – delivering a GPU-as-a-Service (GPUaaS) platform that brings ultra-high performance, scalability, and simplicity to the AI infrastructure market.
The solution integrates WEKA’s AI-native data platform with Nebius’ full-stack AI cloud, offering customers an infrastructure backbone purpose-built to handle the unique demands of AI model training and inference at scale. The partnership offers a preview of what the next generation of AI cloud infrastructure will look like.
Breaking Bottlenecks with AI-Native Storage and Compute
Enterprises training cutting-edge models often face infrastructure constraints in four areas: compute, memory, storage, and data management. These friction points slow innovation and delay time to value. The WEKA-Nebius collaboration addresses these limitations by delivering a cloud-native solution with microsecond latency, high throughput, and seamless scalability from petabytes to exabytes of data.
At the heart of the solution is Nebius’ GPU-rich AI Cloud, a purpose-built platform designed from the ground up for AI/ML workloads. Nebius blends proprietary cloud software, in-house hardware design, and developer-first tooling to deliver a streamlined environment for model builders, from startups to research institutions.
To fuel its premium tier, Nebius selected WEKA’s data platform, citing its consistent performance across mixed I/O workloads, robust metadata handling, and multitenancy capabilities – must-haves for large-scale AI environments.
“WEKA exceeded every expectation and requirement we had,” said Danila Shtan, CTO at Nebius. “It delivers outstanding throughput, IOPS, and low latency while managing mixed read/write workloads at scale.”
A Real-World Use Case: Scaling Innovation in Research
One of the first deployments of this integrated solution is already in action at a leading research institution. The organization selected Nebius to power its large-scale experimentation and AI model development and brought in WEKA to meet storage performance and manageability needs. The result? A multi-thousand-GPU cluster backed by 2PB of WEKA storage – delivering a fully managed, high-performance environment tailored for rigorous AI research.
Key features like user and directory quotas were critical in customizing the platform to the institution’s operational demands. And by pairing Nebius’ scalable compute with WEKA’s ultra-fast storage layer, the deployment ensures minimal bottlenecks and maximum utilization, accelerating time to insights.
The TechArena Take
This partnership exemplifies a key trend in enterprise AI: the rise of neoclouds. Unlike general-purpose hyperscalers, neocloud providers like Nebius offer tailored platforms for AI development, focusing on performance, control, and flexibility. These environments are quickly becoming the go-to solution for enterprises that want to move fast without compromising on power.
Meanwhile, WEKA continues to cement its position as the high-performance storage layer for AI, enabling faster training and smarter infrastructure utilization. In environments where every millisecond counts, the ability to reduce latency, improve GPU utilization, and eliminate data silos can be the difference between leadership and lag.
“Together, Nebius and WEKA are redefining what’s possible when high-performance storage meets AI-first infrastructure,” said Liran Zvibel, WEKA CEO. “It’s a unified solution that is a catalyst for enterprise AI and agentic AI innovation.”
The WEKA-Nebius solution is a compelling model for what’s next: AI-native infrastructure as a service, where every layer of the stack is designed to accelerate AI.
Explore More
Learn how Nebius and WEKA are powering next-gen AI infrastructure.