At Advancing AI, AMD unveils the MI355 with 35× gen-over-gen inference gains and doubles down on open innovation – from ROCm 7 to Helios infrastructure – to challenge NVIDIA’s AI leadership.
The deal is a strategic move to bolster Qualcomm’s AI and custom silicon capabilities amid intensifying competition, and it could mark the start of a wave of AI silicon acquisitions.
A new partnership combines WEKA’s AI-native storage with Nebius’ GPUaaS platform to accelerate model training, inference, and innovation with microsecond latency and extreme scalability.
As the battle for AI market share continues, AMD’s recent acquisitions signal a strategic move toward optimizing both software and hardware for inference workloads and real-world AI deployment.
The HPE-owned platform combines unified observability, smart alert correlation, and automation to tackle hybrid IT complexity while also working with existing monitoring tools.
AIStor’s stateless, gateway-free design solves legacy storage issues, enabling high-performance object-native infrastructure for exabyte-scale AI and analytics workloads.
Marvell is inking a deal for optical interconnect startup Celestial AI in a massive bet that the industry has shifted from being compute-constrained to bandwidth-constrained.
As AI training pushes data centers to unprecedented power densities, researchers reveal an affordable solution that lets computing thrive on fluctuating renewable energy.
In Part 1 of his 2026 predictions series, Voice of Innovation Matty Bakkeren explores how AI, power, cooling, and supply chains are reshaping data center infrastructure for a utility-scale future.
As up to 10 million jobs disappear and quality content moves behind paywalls, the question isn’t if AI will reshape society. It’s whether 2026 is the year we’ll control the burn or watch it spread.
New liquid cooling solutions have introduced a critical new system into the data center, and the company's “magic dust” additive packages are proving essential to keeping AI running.
Intel veteran and Machani Robotics CSO/CTO Niv Sundaram, one of TechArena’s newest voices of innovation, talks emotionally intelligent AI, companion humanoids, and why real innovation starts and ends with human wellbeing.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
From CPU orchestration to scaling efficiency in networks, leaders reveal how to assess your use case, leverage existing infrastructure, and productize AI instead of just experimenting.
From the OCP Global Summit, hear why 50% GPU utilization is a “civilization-level” problem, and why open standards are key to unlocking underutilized compute capacity.
In the Arena: Allyson Klein with Axelera CMO Alexis Crowell on inference-first AI silicon, a customer-driven SDK, and what recent tapeouts reveal about the roadmap.
In this episode of Data Insights, host Allyson Klein and co-host Jeniece Wnorowski sit down with Dr. Rohith Vangalla of Optum to discuss the future of AI in healthcare.
From OCP Summit, Metrum AI CEO Steen Graham unpacks multi-agent infrastructure, SSD-accelerated RAG, and the memory-to-storage shift—plus a 2026 roadmap to boost GPU utilization, uptime, and time-to-value.