Product marketers have long relied on NIST for clarity and consistency — but with new frameworks emerging for AI, it's time to ask whether these guidelines go far enough in prioritizing fairness, safety, and accuracy.
At GTC 2025, a discussion between Deloitte and VAST showed how their partnership is scaling enterprise AI with secure, auditable infrastructure, delivering business value for next-gen agentic AI adoption.
Verge.io’s George Crump shares how a unified infrastructure approach is driving efficiency, performance, and AI-readiness — without the legacy bloat.
At GTC 2025, Nebius and VAST shared how their collaboration delivers high-performance, scalable AI infrastructure for enterprise workloads—making cloud AI more usable and accessible.
MLPerf Inference 5.0 signals the rise of large language models, with Llama 2 70B surpassing ResNet-50 in submissions and driving next-gen AI performance across compute platforms.
MemryX, a provider of edge AI acceleration hardware, recently closed its latest round of funding, serving as a potential bellwether for the next growth edge in AI compute.
At Advancing AI, AMD unveils the MI355 with 35× gen-over-gen gains and doubles down on open innovation, from ROCm 7 to Helios infrastructure, to challenge NVIDIA's AI leadership.
The deal marks a strategic move to bolster Qualcomm's AI and custom silicon capabilities amid intensifying competition, and may signal the start of a wave of AI silicon acquisitions.
A new partnership combines WEKA’s AI-native storage with Nebius’ GPUaaS platform to accelerate model training, inference, and innovation with microsecond latency and extreme scalability.
As the battle for AI market share continues, AMD’s recent acquisitions signal a strategic move toward optimizing both software and hardware for inference workloads and real-world AI deployment.
The HPE-owned platform combines unified observability, smart alert correlation, and automation to tackle hybrid IT complexity while also working with existing monitoring tools.
AIStor’s stateless, gateway-free design solves legacy storage issues, enabling high-performance object-native infrastructure for exabyte-scale AI and analytics workloads.
State Street's Anusha Nerella explores AI-driven FinTech infrastructure: scalability, governance, and agentic computing. Interested in seeing Anusha Nerella live at the AI Infra Summit? Find out more here.
In this episode of In the Arena, hear how cross-border collaboration, sustainability, and tech are shaping the future of patient care and innovation.
Tune in to our latest episode of In the Arena to discover how Verge.io's unified infrastructure platform simplifies IT management, boosts efficiency, and prepares data centers for the AI-driven future.
Join us on Data Insights as Mark Klarzynski from PEAK:AIO explores how high-performance AI storage is driving innovation in conservation, healthcare, and edge computing for a sustainable future.
Untether AI's Bob Beachler explores the future of AI inference, from energy-efficient silicon to edge computing challenges, MLPerf benchmarks, and the evolving enterprise AI landscape.
Explore how OCP’s Composable Memory Systems group tackles AI-driven challenges in memory bandwidth, latency, and scalability to optimize performance across modern data centers.