
Cloud-Native 2025 by the Numbers: The Developer Tent Just Got Bigger
Cloud-native isn’t contracting—it’s climbing up the stack. The Cloud Native Computing Foundation’s (CNCF’s) latest State of Cloud Native Development—done in partnership with SlashData—shows the community expanding beyond traditional Kubernetes operators into a much wider slice of backend developers who may never touch cluster primitives directly. That shift explains why some dashboards show container/Kubernetes “usage” leveling off even as cloud-native grows overall: the interface is moving up a layer to internal developer platforms and opinionated tooling.
“Cloud-native is moving from being a tech stack to a cultural shift in how developers interact with infrastructure,” said Bob Killen, senior technical program manager at CNCF. “It’s about empowering teams to build on top of a flexible, standardized foundation, not just running workloads in containers.”
What the Data Says
CNCF and SlashData estimate 15.6 million developers now qualify as cloud native, about 32% of the global developer population, with roughly 9.3 million in the traditional backend segment. Among developers who work on backend services, 56% are cloud native in Q3 2025—up from 49% in Q1 2025. Hybrid-cloud deployments climbed from 22% in early 2021 to 30% in Q3 2025, and multi-cloud sits at 23%. Meanwhile, only 41% of professional machine learning/artificial intelligence (ML/AI) developers identify as cloud native—likely because many consume AI via managed endpoints that abstract away the stack.
Why “Cloud-Native Without Kubernetes” Makes Sense
Killen described the pattern plainly in our interview: many backend developers now deploy through internal platforms like Backstage and other dev-portal tools rather than touching containers or Kubernetes directly. That doesn’t reduce the relevance of Kubernetes; it elevates it and makes it even more accessible. Teams “build once” to Kubernetes and point workloads to wherever capacity and cost line up, on-prem or cloud, without re-plumbing their developer workflow. This is the portability dividend the ecosystem bet on a decade ago.
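The “build once, point workloads wherever capacity and cost line up” idea can be sketched in a few lines. Everything below (the Venue type, pick_venue, and the example numbers) is a hypothetical illustration of a platform-layer placement decision, not the API of any specific product:

```python
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    free_gpu_hours: float    # spare capacity available right now
    cost_per_gpu_hour: float

def pick_venue(venues, needed_gpu_hours):
    """Return the cheapest venue that has enough spare capacity."""
    eligible = [v for v in venues if v.free_gpu_hours >= needed_gpu_hours]
    if not eligible:
        raise RuntimeError("no venue has enough capacity")
    return min(eligible, key=lambda v: v.cost_per_gpu_hour)

# Illustrative numbers: an on-prem cluster plus two public clouds.
venues = [
    Venue("on-prem", free_gpu_hours=40, cost_per_gpu_hour=1.10),
    Venue("cloud-a", free_gpu_hours=500, cost_per_gpu_hour=2.40),
    Venue("cloud-b", free_gpu_hours=200, cost_per_gpu_hour=1.90),
]

print(pick_venue(venues, 100).name)  # cloud-b: cheapest venue with room
print(pick_venue(venues, 10).name)   # on-prem: steady-state stays local
```

The workload spec itself never changes; only the placement decision does, which is the portability dividend the paragraph above describes.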
“While AI/ML developers have infrastructure-heavy workloads, many don’t identify as cloud-native developers because they’re interacting with the infrastructure through abstracted layers like managed endpoints,” he said.
AI Is Pushing Hybrid and Multi-Cloud, Just Not Always Visibly
Hybrid-cloud’s steady rise isn’t a fashion cycle; it’s economics and capacity. GPU availability, compliance posture, and data-gravity considerations favor a mixed estate: local clusters for steady-state workloads, burst capacity in public clouds when queues spike, and selective use of specialized GPU instances for inference. The report’s trendline from 22% hybrid in 2021 to 30% in 2025 tracks what we hear from platform teams: design for flexibility first, then optimize per workload.
Inside the Tech Radar: What Developers Would Adopt Today
The CNCF and SlashData Tech Radar Report, which surveys what tools developers are actually using and recommending, points to a few emerging patterns:
- AI inference engines and tools: NVIDIA Triton, DeepSpeed, TensorFlow Serving, and BentoML are placed in the adopt position, with Triton leading both maturity and usefulness in developer ratings. That’s consistent with what we see in production: a bias toward stable, vendor-backed or widely used inference stacks for latency-sensitive workloads.
- Agentic AI platforms: Model Context Protocol (MCP) and Llama Stack land in adopt. MCP leads on maturity and usefulness in the Radar data. For shops experimenting with agent-to-agent workflows and tool invocation, this suggests a near-term path that doesn’t require inventing a framework from scratch.
- ML orchestration: Airflow and Metaflow rise to adopt. Metaflow scores highest on maturity, while Airflow tops usefulness—pragmatic choices for teams bridging data engineering legacies with modern model pipelines.
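At their core, orchestrators like Airflow and Metaflow turn declared task dependencies into a valid execution order. A stdlib-only sketch of that idea follows; the pipeline below is hypothetical, and this is not either tool’s API:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of upstream tasks it depends on
# (a hypothetical train-and-deploy pipeline).
pipeline = {
    "extract":  set(),
    "validate": {"extract"},
    "train":    {"validate"},
    "evaluate": {"train"},
    "deploy":   {"evaluate"},
}

# Resolve the dependency graph into a runnable order.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # extract first, deploy last
```

Real orchestrators add scheduling, retries, and state on top, but dependency resolution like this is the common core that makes both tools “pragmatic choices” for pipeline work.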
What This Means
Here are a few observations from the CNCF/SlashData State of Cloud Native Development report:
1. Design Attention Is Moving to the Portal Layer
With 77% of backend developers using at least one cloud-native technology while many don’t identify as “Kubernetes users,” the center of gravity appears to be shifting toward internal developer platforms. Cost, performance, and security signals are increasingly surfaced in portals rather than in cluster-level tools.
2. Hybrid/Multi Is Becoming a Steady State
The report shows hybrid usage at 32% and multi-cloud at 26% among backend developers, with distributed cloud at 15%. Taken together, those shares suggest multi-venue deployment is becoming routine rather than exceptional, with Kubernetes serving as the portability layer across environments.
3. AI Plumbing Is Consolidating Around a Few Stacks
Many AI teams still consume managed endpoints, but the Tech Radar highlights a narrowing set of building blocks: Triton/DeepSpeed/TF Serving/BentoML for inference, MCP/Llama Stack for agentic scaffolding, and Airflow/Metaflow for orchestration. The pattern suggests a pragmatic core is emerging inside otherwise varied AI pipelines.
Why AI Developers Under-Index on “Cloud Native”
Only 41% of professional AI/ML developers are counted as cloud native in the study. That doesn’t mean they aren’t running on cloud-native infrastructure; it means consumption is often through higher-level SaaS or managed services where the platform owns the runtime. As more teams bring inference and retrieval closer to their data for cost, latency, or privacy, expect that percentage to rise—especially as internal developer platforms (IDPs) make “cloud-native-by-default” the path of least resistance.
What to Watch Next
Two dynamics will shape 2026 roadmaps.
- IDP gravity. The more you can abstract cluster ops into templates and policies inside your portal, the faster you onboard non-specialist developers into cloud-native patterns without cognitive overload. Expect continued growth in dev-portal plug-ins that expose cost and performance insights contextually rather than via separate FinOps dashboards. The report’s “big-tent” framing for backend devs is an early indicator.
- Agentic patterns meeting site reliability engineering (SRE) guardrails. As teams test agent-to-agent workflows, governance will hinge on policy scopes, action audits, and clear escalation paths. The Tech Radar’s adopt signals around MCP and Llama Stack suggest a shared vocabulary is arriving; pair that with your existing change-management controls before letting anything write to production.
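The guardrail combination above (policy scopes, action audits, escalation paths) can be sketched simply. All names here (ALLOWED_ACTIONS, review) are hypothetical illustrations, not any product’s API:

```python
from datetime import datetime, timezone

# Hypothetical auto-approved scope for a remediation agent; anything
# outside it escalates to a human reviewer.
ALLOWED_ACTIONS = {"restart_pod", "scale_deployment"}
audit_log = []

def review(action, target):
    """Log every proposed action; execute in-scope ones, escalate the rest."""
    decision = "execute" if action in ALLOWED_ACTIONS else "escalate"
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "target": target,
        "decision": decision,
    })
    return decision

print(review("restart_pod", "checkout-7f9c"))  # in scope: execute
print(review("drop_table", "orders"))          # out of scope: escalate
```

The point is that the audit trail exists regardless of the decision, which is what makes post-hoc review and integration with existing change-management controls possible.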
TechArena Take
Cloud-native isn’t fading; it’s moving up the stack. The center of gravity appears to be shifting from cluster primitives to internal developer platforms. Kubernetes continues to function as the portability layer, while more developers interact through portals and opinionated tools rather than directly with containers.
Hybrid and multi-cloud usage looks less like an edge case and more like standard operating context. The data suggests routine use of multiple execution venues as organizations balance capacity, cost, and locality considerations over time.
Developer sentiment around inference engines (e.g., Triton, DeepSpeed, TensorFlow Serving, BentoML), agentic scaffolding (MCP, Llama Stack), and orchestration (Airflow, Metaflow) points to a pragmatic core of components coalescing inside otherwise diverse AI pipelines.
Across interviews and releases, “agentic SRE” is taking shape as a layered pattern: explain-and-observe capabilities first, human-reviewed changes next, and policy-scoped autonomy for recurring fixes. Notable strides include transparent reasoning, auditable actions, and domain-scoped agents aimed at reducing error surface.
Two advancements stand out: platform-level immutability for backups that treats ransomware recovery as table stakes, and live container migration aimed at maintaining long jobs on ephemeral capacity. Both represent meaningful steps toward reliability at fleet scale without sacrificing economics.