At Commvault SHIFT, “ResOps” and AI resilience were framed as the next operating model for enterprises facing AI-driven threats and cloud sprawl, raising the bar for what “clean” recovery should mean.
AI is turning product development into a living, experiment-led system, where causal inference, data, and automation form a feedback loop that learns from releases to build smarter products faster.
From WEKA’s memory grid and exabyte storage to 800G fabrics, liquid-cooled AI factories, edge clusters, and emerging quantum accelerators, SC25 proved HPC is now about end-to-end AI infrastructure.
Stepping into a new cybersecurity leadership role, the smartest first move isn’t a new tool or policy but the right questions. Use these 15 to map risk, culture, and influence before you start changing anything.
At KubeCon + CloudNativeCon in Atlanta, Devtron, Komodor, and Dynatrace showed how AI is reshaping Kubernetes ops—from self-healing fleets and spot-friendly migration to AI observability and business ROI.
CNCF and SlashData’s latest report counts 15.6M cloud-native developers as internal developer platforms (IDPs) pull backend teams into the fold; hybrid and multi-cloud adoption rises with AI demand while inference stacks and agentic frameworks coalesce.
MLCommons launches MLPerf Automotive v0.5, the first standardized benchmark suite to measure real-world AI performance in safety-critical automotive applications.
From predicting sepsis before symptoms appear to enabling rural clinics to make specialist-level diagnoses, a privacy-first approach to AI in health care promises to transform lives.
Surveying 250 IT pros, we found 29% already run SSDs beyond performance tiers, 81% would migrate when TCO wins, and storage innovation is a top lever to free power and space across the data center.
PowerScale delivers unmatched performance and scale for AI-driven transformation, while 122TB drives reshape enterprise infrastructure, proving storage is AI’s competitive edge in today’s data era.
From Intel’s layoffs to stealth automation, AI is reshaping work at a pace that outstrips human adaptation—driving record stress, uneven gains, and a scramble to reskill before the next downturn hits.
Allyson Klein and Robert Blum of Lightwave Logic unpack how electro-optic polymers, paired with silicon photonics, lower power and boost density on the road to 400G-per-lane optics, with a 2027 volume ramp in sight.
Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
Billions of customer interactions during peak seasons expose network bottlenecks, which is why critical infrastructure decisions must happen before you write a single line of code.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
Durgesh Srivastava unpacks a data-loop approach that powers reliable edge inference, captures anomalies, and encodes technician know-how so robots weld, inspect, and recover like seasoned operators.