AI demand is tightening HDD and NAND supply—and prices may follow. VAST is betting on flash reclamation and KV-cache persistence as storage starts acting more like memory.
RackRenew remanufactures OCP-compliant infrastructure into certified, warranty-backed assemblies—standardized, tested, and ready to deploy for faster capacity without bespoke engineering.
These data-infrastructure shifts will determine which enterprises scale AI in 2026, from real-time context and agentic guardrails to governance, efficiency, and a more resilient data foundation.
Arm’s OCP board seat and new FCSA spec push chiplet interoperability from idea to implementation—enabling mix-and-match silicon and smarter storage so teams can build AI without hyperscaler budgets.
Xeon 6 marries P-cores, E-cores, and scalable memory to feed data-hungry HPC workloads, eliminating bandwidth bottlenecks so spectral sims and other memory-bound codes can finally scale.
Open models move fast—but production doesn’t forgive surprises. Lynn Comp maps how to pair open-source AI with a solid CPU foundation and orchestration to scale from pilot to platform.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
Recorded live at OCP in San Jose, Allyson Klein talks with CESQ’s Lesya Dymyd about hybrid quantum-classical computing, the new Maison du Quantique, and how real-world use cases may emerge over the next 5–7 years.
From OCP Summit San Jose, Allyson Klein and co-host Jeniece Wnorowski interview Dr. Andrew Chien (UChicago & Argonne) on grid interconnects, rack-scale standards, and how openness speeds innovation.
Ventiva CEO Carl Schlachte joins Allyson Klein to share how the company’s Ionic Cooling Engine is transforming laptops, servers, and beyond with silent, modular airflow.
TechArena host Allyson Klein chats with Microsoft’s Vice President of Azure AI and HPC Infrastructure, Nidhi Chappell, in advance of Microsoft Build 2024. Nidhi shares how her organization is accelerating deployments of critical technology to fuel the insatiable demand for AI around the world, and how Microsoft’s AI tools, including Copilot, OpenAI, and more, have been met with overwhelming engagement from developers. She also talks about Microsoft’s silicon plans and strategic collaborations with NVIDIA and AMD.
TechArena host Allyson Klein chats with Research Institutes of Sweden’s Jon Summers about the latest research his team has conducted on efficient infrastructure and data center buildout in the wake of massive data center growth for the AI era.
TechArena host Allyson Klein chats with Palo Alto Electron CEO Jawad Nasrullah about his vision for an open chiplet economy, the semiconductor manufacturing hurdles standing in the way of broad chiplet market delivery, and how he plans to play a role in shaping this next evolution of the semiconductor landscape.
TechArena host Allyson Klein chats with OCP’s Raul Alvarez at OCP Lisbon 2024 about his new charter to accelerate growth of the European data center market and his ongoing work in immersion cooling technologies.
TechArena host Allyson Klein and Solidigm’s Jeniece Wnorowski chat with Weka’s Joel Kaufman as he tours the Weka data platform and explains how the company’s innovation provides sustainable data management that scales for the AI era.
TechArena host Allyson Klein chats with PLVision Director of Open Networking Solutions and Strategy, Taras Chornyi, about the progress of SONiC and open network infrastructure for the AI era.