
KubeCon Atlanta: CNCF Sets AI Standard for Kubernetes

November 11, 2025

The energy was palpable across the Georgia World Congress Center in Atlanta this morning as 9,000 people gathered for the 10th annual KubeCon + CloudNativeCon, where the Linux Foundation announced a brand-new Kubernetes AI Conformance program, a community-driven certification aimed at making AI workloads portable and interoperable across Kubernetes platforms.

The opening keynotes drew a packed house and delivered a clear message: the next decade of cloud native will be defined by how well this community standardizes AI at scale.

It’s a fitting inflection point. This year marks the 10-year anniversary of the Cloud Native Computing Foundation (CNCF), and the foundation’s journey from a handful of seed projects to a global, high-velocity ecosystem is the backdrop for what comes next.

How We Got Here

The CNCF launched in 2015 under the Linux Foundation to steward a new operational model built around containers, orchestration, and declarative automation. The first CNCF Board meeting took place that December at The New York Times offices, and by March 2016, the Technical Oversight Committee had formally accepted Kubernetes as the foundation’s first project. Ten years later, the numbers tell the story: nearly 300,000 contributors across 190 countries have pushed 18.8 million contributions into more than 230 projects. The once-compact cloud native landscape now spans everything from core orchestration to observability, service meshes, security, data, and developer experience.

That community scale shows up in the audience, too: roughly 48 percent of attendees are first-timers, a reminder that cloud native keeps onboarding new builders even as it professionalizes.

Where We Are Today

The membership base has grown from 22 founding organizations to more than 700 member companies—platinum and gold vendors, a deep bench of silver members, and a growing cadre of end-user organizations that help steer real-world priorities. A new platinum end user, CVS Health, was announced on stage, underscoring how cloud native has moved well beyond hyperscale tech firms into heavily regulated, mission-critical industries.

AI Continues to Grow at Staggering Scale

“The two most significant trends are merging right now—cloud native and AI are not separate technology trends; they are really coming together,” said Jonathan Bryce, executive director of cloud + infrastructure at the Linux Foundation.

That was the through-line from the main stage this morning. CNCF leaders framed AI in three layers—training, inference, and applications/agents—and called out inference as the near-term hotspot. The scale is staggering: Google said its systems jumped from about 980 trillion to roughly 1.33 quadrillion tokens per month (an increase of more than a third) in just a few months, and every large enterprise is now under pressure to stand up reliable, cost-efficient AI services, not just proofs of concept.

Major Next Step: Kubernetes AI Conformance

To meet that moment, CNCF introduced the Kubernetes AI Conformance program, a community-driven certification aimed at making AI workloads portable and interoperable across Kubernetes platforms. Platforms that earn AI Conformance are expected to meet concrete requirements across six pillars:

  • Accelerators: hardware abstraction and scheduling for GPUs/TPUs and other accelerators (built on capabilities like Dynamic Resource Allocation, or DRA, which graduated to GA in Kubernetes 1.34); a minimal sketch follows this list.
  • Networking: reliable, policy-aware connectivity for AI services.
  • Security: supply-chain and runtime controls suitable for production use.
  • Scheduling: predictable placement and scaling for AI workloads, including fractional or multi-node accelerator allocation where applicable.
  • Observability: standardized metrics and traces for models and accelerators.
  • Operators: lifecycle automation patterns that make AI stacks manageable.
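
To make the accelerators pillar concrete, here is a minimal Go sketch of a pod that requests a GPU through the long-standing extended-resource interface. This is our own illustration, not part of the conformance spec: the container image is hypothetical, and nvidia.com/gpu is the conventional resource name advertised by NVIDIA's device plugin. On a DRA-enabled 1.34+ cluster, a ResourceClaim reference would take the place of the resource limit.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A pod that asks the scheduler for one GPU via the extended-resource
	// interface. On DRA-enabled clusters, a ResourceClaim reference would
	// replace this resource limit.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "inference-server"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "model",
				Image: "example.com/vlm-server:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						// Conventional name advertised by NVIDIA's device
						// plugin; other accelerator vendors use other keys.
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}

	// Render the manifest so it could be applied with kubectl.
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Requesting accelerators through the standard resources stanza is what keeps the workload portable: any conformant cluster that advertises the same resource name can schedule it unchanged.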

Why It Matters

The original Kubernetes Conformance program is one of the quiet reasons cloud native scaled: it gave buyers confidence that distributions wouldn’t drift and that workloads would behave predictably across environments. AI needs the same discipline. Without it, teams get trapped in bespoke integrations, vendor-specific quirks, and fragile pipelines that are hard to operate at scale.

A live demo on stage walked through what an AI-conformant cluster looks like in practice: using DRA to discover accelerators and define resource plans; deploying a vision-language model; scraping model metrics; autoscaling via custom metrics; and exposing accelerator telemetry such as utilization and temperature. The point wasn’t the specific model—it was the proof that a consistent, open set of platform guarantees shortens the path from “it runs” to “it operates.”
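
To give a flavor of that "discover accelerators" step, here is a minimal client-go sketch, our own illustration rather than the demo's actual code, that lists cluster nodes and reports their allocatable GPU capacity under the device-plugin resource name:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config);
	// assumes the caller can reach a cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Walk the node list and report how many GPUs each node can schedule.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// "nvidia.com/gpu" is the device-plugin resource name; other
		// vendors advertise different keys.
		if gpus, ok := node.Status.Allocatable["nvidia.com/gpu"]; ok {
			fmt.Printf("%s: %s allocatable GPU(s)\n", node.Name, gpus.String())
		}
	}
}
```

A DRA-based platform goes further, publishing devices as ResourceSlice objects with structured attributes such as model, memory, and topology that schedulers and autoscalers can reason about, which is exactly the kind of guarantee the conformance pillars standardize.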

Who’s In the Newly Formed AI-Kubernetes Club

Initial participants shown on the keynote logo wall include hyperscalers, enterprise platforms, and AI infrastructure providers such as Google Cloud, Microsoft Azure, AWS, NVIDIA, Red Hat, Oracle, SUSE, SAP, Akamai, Alibaba Cloud, Broadcom, CoreWeave, DaoCloud, and Kubermatic, among others. Expect that roster to grow quickly as vendors align their roadmaps and customers start asking for the badge.

How It Accelerates Adoption

By defining a common baseline across all six pillars, from accelerators and networking through security, scheduling, observability, and operators, AI Conformance gives builders a stable target and gives organizations a portable operating model. Vendors can innovate above the line; users get fewer surprises when they move from lab to production or from one environment to another. It’s exactly the kind of boring, essential plumbing that lets the more exciting parts of AI—faster models, better retrieval, smarter agents—ship without reinventing the platform every time.

The Bigger Frame: Cloud Native and AI Are Merging

CNCF’s latest developer data puts the cloud-native population at 15.6 million, with nearly half already building AI systems. That overlap explains the energy in Atlanta: the community that figured out how to run the internet reliably now wants to make AI equally routine. The early signal is that Kubernetes will be the common substrate for AI not only because it’s ubiquitous, but because conformance programs like this one make it predictable.

TechArena Take

Standards are how ecosystems scale. Kubernetes AI Conformance is CNCF replaying a proven playbook at precisely the right layer of the stack. It won’t pick winners for model servers, vector databases, or agent frameworks—and it shouldn’t. Instead, it sets a floor for what every platform must guarantee so AI teams can move faster without stapling together one-off integrations for each environment.

Three implications to watch:

  • Procurement and platform strategy: expect AI Conformance to show up in RFPs. If you sell or operate Kubernetes at scale, your roadmap just got a new column.
  • Inference operations maturity: the demo emphasis on DRA, custom metrics, autoscaling, and accelerator telemetry is a tell—production AI is an operational problem first.
  • Portability pressure: as more providers certify, the switching cost narrative weakens. That’s healthy competitive pressure and a boon for enterprises trying to avoid lock-in.

Keep following TechArena.ai this week for updates and news from KubeCon + CloudNativeCon in Atlanta.
