
An enterprise AI assistant generated a polished recommendation in under two seconds. The team then spent the next 40 minutes verifying whether that output was safe to use.
That delay was not caused by model latency or infrastructure failure. It came from the work that followed. Teams had to reconstruct where the answer came from, check whether the underlying records were still current, confirm that no restricted content had slipped into the response, and make sure the recommendation could be explained if compliance or operations asked questions later.
The model was fast. The trust was slow.
That gap is becoming one of the most important operational realities in enterprise AI, yet most organizations do not measure it directly.
For years, data platform metrics were built for systems humans read. Uptime showed whether the platform was available. Latency showed whether queries returned quickly. Throughput showed whether pipelines could keep up. Freshness showed whether the underlying data was recent. These metrics still matter, but they only tell us whether a system is functioning. They do not tell us whether its output can be acted on with confidence.
AI systems changed that equation. They do not just present information for a person to interpret. They summarize, recommend, rank, and influence decisions directly. In that environment, the more important question is no longer only how fast an answer arrives. It is how quickly that answer becomes trustworthy enough to use.
I think of that interval as time-to-trust: the time between an AI output being generated and that output becoming trustworthy enough to act on.
In practice, that trust usually depends on four checks: provenance (where the answer came from), currency (whether the underlying context is still up to date), policy compliance (whether the output is safe and appropriate to use), and explainability (whether the organization can reconstruct how the answer was assembled).
Traditional platform metrics were designed for an earlier operating model. In the dashboard era, a human analyst usually sat between the data and the decision. If something looked wrong, there was time to pause, investigate, cross-check, and add context before anyone acted.
AI compresses that distance.
An assistant can summarize customer history for a service representative. A copilot can suggest operational responses based on live events. A recommendation system can rank the next action for a relationship manager. In each case, the output is no longer passive. It enters a workflow quickly and creates pressure to move faster.
That is where the blind spot in traditional KPIs becomes obvious.
A system can have excellent uptime and still produce outputs no one is comfortable using. It can have low latency and still force teams into long validation loops. It can have fresh data and still fail because no one can explain how the answer was assembled or whether the response crossed a policy boundary.
The real delay is not always generation time. It is verification time.
Time-to-trust is not a reporting problem. It is an architectural one.
Organizations do not reduce trust delays by adding more dashboards after the fact. They reduce them by engineering systems that make verification faster and more reliable from the start. Low time-to-trust emerges from the design of the data platform, the context pipeline, and the runtime controls surrounding the model.
Consider the first trust question: where did this answer come from?
If lineage is incomplete, retrieval is opaque, or the output cannot be tied back to specific records or documents, that question becomes a manual investigation. Teams search logs, compare versions, and message multiple owners just to reconstruct provenance. What looks like an AI trust issue is really a metadata and observability issue.
Now consider the second question: is the context still current?
Many enterprise AI failures are not hallucinations in the usual sense. The model is often reasoning over information that is stale, incomplete, or out of sync with current policy and operations. If embedding refresh cycles are inconsistent, if context assembly is not versioned, or if source updates do not propagate cleanly, trust slows down because every output must be treated as potentially outdated.
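The currency check above can be made mechanical. A minimal sketch, assuming hypothetical record fields (`modified` for the source system, `indexed` for the last embedding refresh; neither is a real API):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical records: when each source was last modified vs. when its
# embedding was last refreshed. Field names are illustrative.
sources = [
    {"id": "policy-doc-17",
     "modified": datetime(2024, 6, 1, tzinfo=timezone.utc),
     "indexed": datetime(2024, 6, 2, tzinfo=timezone.utc)},
    {"id": "pricing-sheet-3",
     "modified": datetime(2024, 6, 10, tzinfo=timezone.utc),
     "indexed": datetime(2024, 5, 20, tzinfo=timezone.utc)},
]

def stale_sources(sources, max_lag=timedelta(days=1)):
    """Flag sources whose embeddings lag behind the underlying record."""
    return [s["id"] for s in sources if s["modified"] - s["indexed"] > max_lag]

print(stale_sources(sources))  # ['pricing-sheet-3']
```

A check like this, run before context assembly, lets the platform answer "is this current?" automatically instead of forcing users to treat every output as potentially outdated.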
The third question is policy. Is the output safe and appropriate to use?
That answer depends on runtime controls. If policy enforcement is scattered across prompts, informal conventions, and manual review, the burden falls back on the user to catch mistakes. But if the system includes policy-aware orchestration, redaction checks, scoped retrieval, and output controls, policy verification becomes faster because the platform has already narrowed the risk surface.
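The shape of such a runtime gate can be sketched in a few lines. This is illustrative only: the SSN pattern, source names, and gate logic are assumptions, standing in for whatever redaction rules and retrieval scopes a real deployment would define:

```python
import re

# Hypothetical redaction rule: mask anything shaped like a US SSN.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text):
    """Mask restricted identifiers before the output leaves the system."""
    return SSN_PATTERN.sub("[REDACTED]", text)

def policy_gate(answer, retrieved_sources, allowed_sources):
    """Scoped retrieval + output control: block out-of-scope sources,
    redact what remains."""
    if not set(retrieved_sources) <= set(allowed_sources):
        return None  # refuse rather than pass an out-of-scope source through
    return redact(answer)

safe = policy_gate("Customer SSN is 123-45-6789.", {"crm"}, {"crm", "billing"})
print(safe)  # Customer SSN is [REDACTED].
```

Because the gate runs on every output, policy verification stops being a per-answer manual review and becomes a property of the pipeline.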
The fourth question is explainability. Can the organization reconstruct how the answer was assembled?
This is not about turning every AI interaction into a research paper. It is about having enough operational traceability to support real decisions. Which sources were retrieved? Which rules were applied? What version of context was used? Which guardrails were triggered? If those answers are available, trust moves faster. If they are missing, confidence slows to a crawl.
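Those four traceability questions map naturally onto a per-response record. A minimal sketch, with hypothetical field names, showing each question becoming a stored fact rather than a forensic exercise:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ResponseTrace:
    """One record per AI response; every trust question has a field."""
    response_id: str
    sources: list                 # which records/documents were retrieved
    rules_applied: list           # which policy rules were applied
    context_version: str          # which version of assembled context was used
    guardrails_triggered: list = field(default_factory=list)

trace = ResponseTrace(
    response_id="r-001",
    sources=["crm/account-42", "kb/refund-policy-v3"],
    rules_applied=["pii-redaction"],
    context_version="2024-06-10T08:00Z",
)
print(asdict(trace)["sources"])
```

If a record like this is written at generation time, answering "how was this assembled?" is a lookup, not an investigation.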
This is why time-to-trust belongs in the same conversation as lineage, ownership, freshness SLAs, metadata quality, contracts, and observability. It is not a soft metric. It is the visible outcome of infrastructure choices.
When enterprise AI pilots stall, the explanation is often framed in terms of model quality. Leaders say the responses were inconsistent. Users say the system felt unreliable. Technical teams say they need more tuning, better prompts, or a stronger model.
Sometimes that is true. Often it is incomplete.
In many organizations, the real problem is simpler: trusting the output takes too long.
The system performs well in demos because the environment is controlled. The documents are curated. The use case is narrow. The audience is forgiving. Once the system enters a live workflow, the real world shows up. Data sources evolve. Permissions vary. Records conflict. Policies change. Edge cases multiply. Suddenly every meaningful answer comes with follow-up questions the platform cannot answer quickly.
At that point, adoption weakens for understandable reasons. Users do not reject the system because they dislike AI. They reject it because trusting it takes too long.
A slow trust loop turns every output into a follow-up exercise. Frontline users stop relying on the assistant. Managers hesitate to embed it into core workflows. Risk teams demand tighter controls. Engineering teams spend more time defending outputs than improving them.
Enterprise AI often fails quietly this way. Not with a crash. Not with a scandal. With hesitation.
One reason this problem is easy to misdiagnose is that the symptom appears at the AI output layer while the root cause often lives in the data layer beneath it.
A recommendation is hard to trust because the source system has unclear ownership.
A summary is hard to trust because document refresh pipelines lag behind policy changes.
A generated response is hard to trust because the retrieval layer cannot show which source fragments were used.
A workflow suggestion is hard to trust because there is no contract defining which fields are authoritative and which are optional.
These are not model problems in the narrow sense. They are platform maturity problems.
For years, many data platforms could tolerate ambiguity. Business definitions drifted. Data products lacked clear ownership. Transformations accumulated without strong contracts. Reports still got delivered, and human analysts learned where the rough edges were. AI reduces the room for that kind of informal adaptation. When outputs are delivered directly into workflows, ambiguity becomes operational drag.
That is why time-to-trust is such a useful lens. Instead of asking only whether an AI response is impressive, it asks how much architectural friction surrounds that response before it can be used.
That is a far more revealing question.
Organizations do not need a large transformation program to begin. They can start with one workflow.
Pick a real AI-enabled use case that matters. It could be an internal copilot, a support assistant, an operations alerting system, or a retrieval-based knowledge tool. Then focus on the outputs that trigger the most scrutiny or require the highest confidence.
For those outputs, measure how long it takes to answer the four trust questions outlined above: provenance, currency, policy compliance, and explainability.
The time required to answer those questions is a practical proxy for time-to-trust.
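The measurement itself can be trivially simple. A sketch with illustrative numbers (the per-check durations below are invented, not from the article):

```python
# Minutes spent answering each trust question for one sampled output.
# The values are illustrative; measure them in your own workflow.
checks = {
    "provenance": 22.0,     # reconstruct where the answer came from
    "currency": 6.0,        # confirm the sources are still current
    "policy": 4.0,          # confirm the output is allowed
    "explainability": 8.0,  # reconstruct how it was assembled
}

time_to_trust = sum(checks.values())      # total verification delay
bottleneck = max(checks, key=checks.get)  # where the delay concentrates

print(time_to_trust, bottleneck)  # 40.0 provenance
```

Even a spreadsheet-grade version of this makes the next paragraph's point visible: the number matters less than which check dominates it.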
What matters is not just the number. It is where the delay comes from.
In many environments, the biggest trust bottleneck is not policy review. It is provenance reconstruction. Teams discover they can answer whether an output is allowed faster than they can explain which records, documents, or retrieval steps produced it. That points directly to a lineage and observability gap, not a model gap.
Once those bottlenecks are visible, the work becomes concrete. Strengthen lineage capture. Version context pipelines. Clarify ownership. Add retrieval traces. Tighten contracts around critical fields. The goal is not perfection. It is reducing the time required to move from output to confident action.
Enterprise leaders often ask whether their AI is accurate, safe, or ready for scale. Those are reasonable questions, but they are incomplete.
A more useful question is this: how long does it take before our AI outputs become trustworthy enough to use?
That question changes the conversation immediately.
It pushes teams beyond benchmark thinking and into operating discipline. It shifts focus from isolated model performance to the full system around the model. It makes trust concrete rather than abstract. And it creates a bridge between technical architecture and business adoption.
Accuracy matters. Speed matters. Cost matters. But in production environments, none of those alone determine whether AI becomes a reliable part of decision-making. Systems create value only when people and processes can use their outputs with confidence.
The next phase of enterprise AI will not be defined only by who produces the fastest answer. It will be shaped by who can make that answer trustworthy in the shortest time.
Because in the end, a system that responds instantly but takes forty minutes to trust is not really moving at AI speed at all.

Healthcare organizations are witnessing a transformation as AI agents, which are autonomous systems that reason, plan, and execute complex workflows, move from research labs into clinical settings. The demos are compelling. The potential is enormous. Yet there’s a gap that many organizations are discovering the hard way: the chasm between a successful pilot and sustainable production deployment. This gap isn’t about technology capability; it’s about governance.
The fundamental difference between agentic AI systems and traditional automation is autonomy: they operate with context-awareness, make decisions dynamically, and adapt to clinical realities in real time. This autonomy is their strength, but it’s also what makes governance essential. A recent report found that 62% of healthcare leaders say fragmented data is blocking AI scaling. This fragmentation isn't just a technical problem; it's a governance problem. Agentic AI systems that work beautifully in isolated pilots often struggle when deployed across diverse clinical environments, data sources, and workflow patterns.
Here are three reasons why governance is the critical bridge that enables healthcare organizations to move from compelling AI demos to sustainable production value.
Healthcare operates under strict regulatory standards, particularly HIPAA, which mandates strict controls over Protected Health Information (PHI). When an AI agent makes a decision, whether it's flagging a critical lab result, generating a clinical note, or processing a prior authorization, that decision must be traceable, auditable, and correctable. Unlike traditional software that follows deterministic paths, agentic systems make probabilistic decisions based on context. Governance ensures these decisions remain within safe, clinically validated boundaries while controlling access to PHI through the principle of least privilege: agents only access the minimum data and tools necessary for their function.
To achieve this level of safety, a robust governance framework must provide defined operational guardrails, complete audit trails of PHI access, least-privilege access controls, and human oversight at critical decision points.
Consider a clinical documentation agent that synthesizes patient data into progress notes. Without governance, it might miss critical context, generate conflicting recommendations, or access PHI beyond what's necessary, violating HIPAA's minimum necessary standard. With proper governance, the same agent operates within defined guardrails, maintains complete audit trails of PHI access, enforces least-privilege access controls, and ensures human oversight at critical decision points, turning a potential liability into a trusted clinical tool.
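The least-privilege and audit-trail mechanics described above can be sketched concisely. Everything here is illustrative: the agent name, scope strings, and record IDs are hypothetical, and the return value stands in for an actual PHI fetch:

```python
# Hypothetical scope grants: each agent gets only the data scopes its
# function requires (HIPAA's minimum necessary standard).
AGENT_SCOPES = {"documentation-agent": {"notes:read", "labs:read"}}
audit_log = []

def access(agent, scope, record_id):
    """Check the agent's grant and log every attempt, allowed or not."""
    allowed = scope in AGENT_SCOPES.get(agent, set())
    audit_log.append({"agent": agent, "scope": scope,
                      "record": record_id, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent} lacks scope {scope}")
    return f"<{scope}:{record_id}>"  # stand-in for the real PHI fetch

access("documentation-agent", "labs:read", "pt-314")      # permitted
try:
    access("documentation-agent", "billing:write", "pt-314")  # denied
except PermissionError:
    pass
print(len(audit_log))  # 2 — denied attempts are audited too
```

The design choice worth noting is that the denial itself is logged: the audit trail must show what the agent tried, not only what it was given.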
Healthcare workers are rightfully skeptical of new technology. Years of experience with rigid, rule-based systems have taught them that automation often creates more work, not less. Agentic AI can break this pattern, but only if clinicians understand how it works and trust its outputs.
To bridge this trust gap, governance frameworks must make transparency a priority, giving clinicians visibility into what an agent did and why.
This transparency enables clinicians to make informed decisions about when to rely on the agent and when to override it. Organizations that invest in governance frameworks early find that their agents gain clinician trust and adoption more quickly, transforming skepticism into confidence.
The scale challenge in healthcare AI isn't just about technology; it's about governance. While agentic AI systems may excel in controlled pilot environments, they face significant hurdles when deployed across diverse clinical settings. Governance frameworks address this by establishing consistent standards, defining clear escalation mechanisms, and creating monitoring systems that detect performance degradation before it impacts care.
Administrative burden consumes an estimated $265 billion annually in healthcare. Consider two examples: First, properly governed authorization agents can process routine cases automatically while flagging complex cases for human review, reducing processing time by 50-70% without increasing error rates. Second, when properly governed, documentation agents can reduce charting time by 30-40% while improving documentation quality and completeness. These gains only materialize when governance frameworks enable reliable scale across diverse clinical environments, proving that governance isn't a barrier to value, but the pathway to achieving it.
The ADDM Model: A Lifecycle Approach to Governance
Building effective governance requires a comprehensive framework built on three core pillars: Security & Compliance (protecting PHI through HIPAA-compliant access controls, implementing least privilege for data and tools, encrypting PHI per Security Rule requirements, and maintaining audit logs), Value-Driven Impact (anchoring initiatives to clear business objectives), and Accuracy & Reliability (ensuring consistent, predictable performance).
The key is to integrate governance at every stage of your agent's journey. Think of it as a continuous cycle rather than a linear process—what we call the ADDM model (Analyze, Develop, Deploy, Manage). Start by analyzing whether AI is the right solution and conduct risk assessments including data privacy and security per NIST guidelines. During development, establish evaluation metrics, test with diverse scenarios, implement guardrails, and design access controls enforcing least privilege for data and tools. Before deployment, validate load capacity, complete user acceptance testing, establish human-in-the-loop checkpoints, and verify HIPAA Security Rule compliance including encryption, audit logging, and access controls. Once in production, manage through continuous monitoring, feedback collection, periodic access reviews to maintain least privilege, and iterative model updates.
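The ADDM stage gates can be expressed as a simple checklist structure. The gate names below are paraphrased from the checks listed in the text; the function and data layout are a hypothetical sketch, not a prescribed implementation:

```python
# ADDM stages and the gates each must clear, paraphrased from the text.
ADDM = {
    "Analyze": ["fit-for-ai decision", "NIST-aligned risk assessment"],
    "Develop": ["evaluation metrics", "diverse scenario tests",
                "guardrails", "least-privilege access design"],
    "Deploy":  ["load validation", "user acceptance testing",
                "human-in-the-loop checkpoints", "HIPAA Security Rule checks"],
    "Manage":  ["continuous monitoring", "feedback collection",
                "periodic access reviews", "iterative updates"],
}

def ready_to_advance(stage, completed):
    """A stage is complete only when every required gate has passed."""
    return set(ADDM[stage]) <= set(completed)

print(ready_to_advance("Analyze", ["fit-for-ai decision"]))  # False
```

Treating the cycle as data rather than tribal knowledge also makes the "continuous cycle" point concrete: the Manage gates feed findings back into the next Analyze pass.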
This requires clear organizational structure with three tiers: strategic leadership to set vision and policy, operational teams to execute day-to-day compliance, and technical teams to monitor and validate agent performance. The most successful organizations treat governance not as a compliance burden, but as a strategic capability that enables innovation while managing risk.
Beyond Compliance: Governance as a Strategic Capability
Governance is the critical bridge that enables healthcare organizations to move from compelling AI demos to sustainable production value. It ensures safety and compliance in high-stakes decisions, builds trust through transparency, and enables scale across fragmented healthcare environments.
Without proper governance, organizations face systems that create new risks, clinician resistance, and regulatory issues requiring costly remediation.
Agentic AI represents a fundamental shift in how healthcare can leverage technology. These systems can augment human capabilities in ways that traditional automation cannot. But realizing this potential requires thoughtful governance. The organizations that succeed won't be those with the most advanced AI technology; they'll be those that recognize governance as a strategic capability, not a compliance burden.
The demos are compelling. The technology is ready. The question isn't whether agentic AI will transform healthcare; it's whether organizations will build the governance frameworks needed to make that transformation safe, trusted, and sustainable. Start building your governance framework today. The organizations that invest in governance early will be the ones that realize the full potential of agentic AI in healthcare.

A new computing paradigm is reshaping the foundations of technological innovation and industrial competitiveness. As AI adoption accelerates, researchers, academics, governments, policymakers, and forward-looking industry decision-makers are confronting the limitations of classical computing. Quantum computing, particularly when combined with high-performance computing (HPC) in hybrid systems and architectures, is gaining real momentum.
The expectation is that the economic impact will be significant. According to Qureca, global quantum research and development investments now surpass $55.7 billion, with projections placing the global quantum market at $106 billion by 2040. Finance, healthcare, manufacturing, energy, communications, defense, security, and space are among the sectors likely to benefit first from quantum advancements, as they rely on solving complex optimization, simulation, and high-risk problems where classical computing reaches its limits.
Although we are still far from demonstrating large-scale commercial value from quantum applications, organizations that want to lead quantum innovation in the near future should invest today in developing capabilities, building expertise, and positioning themselves as early adopters.
Data centers sit at the center of this transition. They provide the infrastructure and system architecture for convergent technologies, including AI, HPC, supercomputers, and early quantum applications, that will underpin future technological economies.
Quantum computing cannot operate in isolation. It must be integrated into broader computing environments. This positions data centers as convergence hubs where HPC, AI, supercomputers, and quantum and hybrid systems can coexist and interact. The industry is already seeing examples of this evolution: systems developed internally by large companies, such as IBM Quantum System One and Google Quantum AI; government-supported initiatives like the EuroHPC Joint Undertaking; and collaborative projects, such as French electric company EDF partnering with quantum startups Quandela, Alice & Bob, and Pasqal to optimize energy management. Hybrid architectures are being tested in specialized facilities that combine classical supercomputers with quantum processors. Data centers are developing “quantum-as-a-service” models, enabling enterprises to access quantum capabilities without owning dedicated hardware (e.g. Scaleway’s Quantum as a Service). Energy-efficient infrastructure is becoming a differentiator, as quantum systems often require highly controlled environments. For data center operators, this shift represents an opportunity to evolve from infrastructure providers to strategic partners in innovation, delivering quantum capabilities alongside traditional services.
The trajectory of classical computing development is slowing. We observe mainly incremental improvements in architectures and systems, while significant performance gains become more energy-intensive and less cost-effective.
Quantum computing offers a fundamentally different approach. Rather than processing information sequentially, it uses quantum states to explore many possibilities simultaneously. In the near term, we will most likely rely on hybrid technologies. Hybrid computing combines the strengths of HPC systems with quantum processors for optimization and simulation. This model allows companies to experiment with emerging technologies and create value today while preparing for more advanced quantum capabilities tomorrow.
Current investments in quantum computing echo the early bets on cloud and artificial intelligence. Those were risky, uncertain projects at the time. They became transformative in the long run.
Governments are investing heavily. In 2025, the EU adopted a Quantum Strategy that leverages scientific excellence and R&D, quantum infrastructure, ecosystem strengthening, skills development, and the integration of sovereign quantum capabilities into space, security, and defense strategies and other sectors. The European Quantum Flagship, backed by a €1 billion budget, is accelerating the development of quantum technologies (quantum computing and simulation, sensing, metrology and quantum communication) across research, industry, and the public sector. In the United States, estimated investment in quantum exceeds $7 billion.
At the organizational level, early investment carries distinct strategic advantages. Early adopters build proprietary knowledge, secure scarce talent, and shape emerging ecosystems. When breakthroughs occur, these organizations have the ability to scale and adjust to market needs quickly. Companies that delay investing in quantum R&D risk technological dependency, loss of competitive advantage, and higher costs when quantum technologies do eventually enter the advanced technological readiness and commercialization phase.
The real differentiator lies in organizational capacity to be prepared for the emerging technologies. Companies investing in quantum today are building a comparative advantage through the capabilities and skills essential for the future. They are developing talent and expertise, either by training internal teams or partnering with quantum research institutions, startups and industries. They are investing in innovative projects that demand experimentation, long-term thinking, cross-functional collaboration, and often academia-industry partnerships and collaborative projects. These organizations are gaining strategic positioning in the ecosystem by working alongside startups, universities, and technology providers.
These capabilities extend well beyond quantum technologies. They help organizations adopt emerging technologies broadly and stay competitive and innovative over the long term.
Quantum and hybrid computing are no longer a distant prospect. They are rapidly becoming the foundation for how complex problems could be solved across finance, pharmaceuticals, energy, defense, security, and aerospace, among others. This is not simply about technological leadership. It is about long-term competitiveness.
Companies that explore and invest today in quantum and hybrid computing are building the expertise, partnerships, and innovation capacities that will define future market leaders. Data centers play a pivotal role, enabling the convergence of HPC, AI, and quantum and hybrid systems while making advanced computing accessible and scalable across industries.
Organizations that failed to embrace past technological breakthroughs early found themselves struggling to remain competitive in the long term. Quantum computing appears to follow a similar trajectory, with potentially even greater impact. In an environment where innovation is the primary driver of competitive advantage, investing in quantum and hybrid computing is becoming a strategic necessity.

The chips powering the next wave of AI infrastructure are getting harder to design. And the tools used to create them are starting to think for themselves.
Synopsys today announced a broad expansion of its collaboration with TSMC, spanning AI-powered EDA flows, silicon-proven IP across advanced and specialty nodes, and new design enablement for co-packaged optics. The announcement, timed to TSMC’s 2026 Technology Symposium in North America, covers TSMC’s 3nm and 2nm families along with A16 (with Super Power Rail) and A14. But the real headline sits in a single word that keeps surfacing across the semiconductor design world: agentic.
Synopsys is collaborating with TSMC on what it calls “agentic run assistance” inside its Fusion Compiler, targeting TSMC’s A14 process using the NanoFlex Pro architecture. In practice, that means the tool can now identify timing improvement opportunities at different stages of the design flow on its own, rather than waiting for an engineer to manually intervene at each checkpoint. The goal: better power, performance, and area results with fewer human-in-the-loop iterations.
This is a meaningful step beyond the optimization work Synopsys has done with its DSO.ai technology over the past several years. Where DSO.ai focused on tuning parameters within a defined design space, agentic run assistance implies the tool is making multi-stage decisions about where and when to act across the flow. AI-assisted physical verification in Synopsys IC Validator is also progressing, aimed at accelerating the identification and resolution of design rule violations for faster tapeout quality.
The multiphysics signoff portfolio is expanding in parallel. Synopsys RedHawk-SC for digital power integrity, Totem-SC for analog power integrity, and HFSS-IC Pro for electromagnetic extraction now span TSMC nodes from A16 through A14. Totem-SC provides ultrahigh-capacity analog power integrity signoff for large N2-based designs, while PathFinder-SC extends multi-die electrostatic discharge signoff coverage to N2. Cloud-based multiprocessor and GPU acceleration shortens turnaround for teams iterating across thermally constrained 3D assemblies.
Chip architectures are fragmenting by design. Multi-die systems built on advanced packaging let designers mix process nodes, integrate heterogeneous functions, and scale beyond the limits of a single monolithic die. Synopsys is leaning into that shift across both its EDA tools and its IP catalog.
The company’s 3DIC Compiler platform now supports TSMC’s CoWoS packaging technology at 5.5x reticle interposer sizes, a scale that tracks with the massive interposers shipping inside today’s flagship AI accelerators. As a unified exploration-to-signoff platform, 3DIC Compiler integrates with RedHawk-SC, RedHawk-SC Electrothermal, and Ansys HFSS software to deliver multiphysics analysis for thermal, power, and high-speed signal integrity in one environment.
On the IP side, Synopsys announced several firsts. Its UCIe IP ASIL B solution on TSMC’s N5A process is the only end-to-end IP of its kind designed for safety-critical automotive multi-die systems, a category that barely existed two years ago but is gaining real traction as automakers adopt chiplet architectures. The company also completed silicon bring-up of the industry’s first low-power M-PHY v6.0 IP on TSMC’s N2P process, pushing next-generation storage connectivity forward for smartphones and mobile applications. Across TSMC’s N5, N3P, and N2P processes, Synopsys achieved first-silicon milestones on PCIe 7.0, HBM4, 224G, DDR5 MRDIMM Gen2, LPDDR6/5X/5, UCIe 64G, and M-PHY v6.0 IP.
Perhaps the most forward-looking piece of the announcement is Synopsys’ multiphysics design enablement for COUPE, TSMC’s co-packaged optics platform. The enablement spans Ansys Zemax OpticStudio for optical path simulation, Ansys Lumerical for photonic device simulation, HFSS-IC Pro for electromagnetic extraction, and RedHawk-SC Electrothermal for thermal and electrical co-simulation.
Synopsys also introduced a 224G IP solution that supports co-packaged optical Ethernet and UALink, targeting the bandwidth demands of next-generation electro-optical systems in AI data centers.
When an EDA vendor starts building full simulation flows for a technology, it signals that commercialization is no longer theoretical. Co-packaged optics has been a conference-circuit favorite for years. Now it has a design tool chain.
Three threads in this announcement deserve attention beyond the product specifics.
First, the agentic language matters. The semiconductor industry is moving past AI-as-optimizer toward AI-as-collaborator in the design flow. Synopsys is not alone in this pursuit, but its depth of integration with TSMC’s most advanced nodes gives it a proving ground that few competitors can match. If agentic run assistance delivers measurable PPA gains on A14, expect the rest of the EDA ecosystem to accelerate their own autonomous workflow roadmaps.
Second, the Ansys acquisition is paying visible dividends. The multiphysics coverage in this announcement, spanning optical, electromagnetic, thermal, and electrical simulation, would not have been possible under one roof before the merger closed. That vertical integration from RTL to photonics simulation is becoming a genuine differentiator, particularly as chip designs grow more three-dimensional and multi-domain.
Third, the co-packaged optics enablement is a quiet signal worth watching. Bandwidth scaling in AI clusters is approaching the practical limits of electrical interconnects. The fact that Synopsys, TSMC, and the Ansys simulation stack are now aligned on COUPE design flows suggests the industry’s timeline for production co-packaged optics may be shorter than many assume. The 224G IP supporting both optical Ethernet and UALink adds a concrete building block to what has, until recently, been mostly a research narrative.
Taken together, this announcement reflects a broader truth about the AI infrastructure buildout: the tools that design AI chips are themselves becoming AI-driven, and the companies that control that feedback loop will shape how fast the next generation of silicon reaches production.

VAST Data announced today that it has closed a Series F financing round, including primary and secondary capital of approximately $1 billion. The financing round creates a $30 billion valuation for the AI operating system company, more than tripling its $9.1 billion Series E valuation from late 2023. The round was led by Drive Capital, with Access Industries as co-lead and participation from new and existing investors such as Fidelity Management & Research Company, NEA, and NVIDIA.
This impressive headline number has the support of strong reported underlying financials. VAST has surpassed $4 billion in cumulative bookings and ended its most recent fiscal year with more than $500 million in committed annual recurring revenue, alongside positive operating margin and free cash flow.
In a blog post accompanying the announcement, VAST Data Co-Founder Jeff Denworth said, “What excites investors about VAST is our unprecedented mix of growth and profitability, demonstrating to the world that a radically disruptive product and focused team can break fundamental business tradeoffs.” The company’s Rule of X score (calculated as the sum of revenue growth rate plus the last-twelve-months free cash flow margin) is 228%, more than five times the 40% typically considered healthy.
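The Rule of X arithmetic is simple enough to show directly. The 200%/28% split below is invented for illustration; VAST reports only the combined 228% score:

```python
def rule_of_x(growth_rate, fcf_margin):
    """Rule of X as defined in the article: revenue growth rate plus
    trailing-twelve-month free cash flow margin."""
    return growth_rate + fcf_margin

# Hypothetical split summing to the reported score.
score = rule_of_x(2.00, 0.28)  # e.g. 200% growth + 28% FCF margin
print(f"{score:.0%}")  # 228%
```

Against the conventional "Rule of 40" benchmark (growth plus margin above 40%), a score of 228% is the outlier the article describes.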
So why take the investment? Denworth cites two reasons:
VAST’s business story traces back to a technical decision made in 2016: designing a distributed systems architecture known as DASE (for disaggregated shared everything) from scratch, specifically for the parallelism demands of deep learning. From that foundation, VAST has created a full-stack computing platform for deep learning, including the VAST DataStore, DataBase, DataEngine, and DataSpace. Earlier this year, the company announced new capabilities for the agentic AI era to build a “thinking machine,” or a system that governs, evaluates, and improves on AI pipelines automatically. VAST reports that today the AI factories that it supports have over 1 billion CUDA cores or over 1 million tensor cores, all accessing a single VAST data platform.
As Denworth explicitly said, this funding round is a market signal. The company’s valuation, its Rule of X score, and its underlying financials are proof that the company’s vision can translate into reality. The DASE architecture bet, made in a seemingly distant past when Sam Altman and Elon Musk were working side-by-side at OpenAI, is now paying off 10 years on as enterprises discover that legacy data infrastructure simply cannot keep pace with agentic AI demands. The company seems to have arrived at this moment with exactly the right product.
The open question is what VAST Data’s competition can offer. As Denworth noted in his blog, the company operates in an odd place: while it competes with companies up and down the data stack, it has no direct analog competitor outside of hyperscalers that put together many services to create an equivalent to VAST’s unified platform. For now, that position is difficult to replicate quickly. Unified architectures are not assembled overnight, and VAST’s decade-long head start shows in both the product and the financials. The gap may not last forever, but VAST’s financial strength gives it the runway to keep widening it.
Read more from VAST in their press release.

The tech landscape is accelerating faster than traditional consulting was designed to handle. As trillions of dollars flood into AI infrastructure and organizations race to define their positions in a shifting ecosystem, the distance between strategy and execution has become one of the costliest gaps in business.
With the recent launch of the TechArena Advisory, we are featuring a series of 5 Fast Facts Q&As to highlight the operators bringing C-suite-grade intelligence to this new function. We recently sat down with our Founder and CEO, Allyson Klein, who built TechArena on a conviction that has only sharpened with time: in a world redefining what intelligence means, human connection still matters. In this edition of our Q&A series, she discusses the collapse of traditional consulting models, the most acute pressure points facing business leaders right now, and what it means to drive disproportionate growth for clients.
The pace changed. What used to be multi-year design cycles and simpler paths to market has given way to a frenetic pace of innovation to serve the demand for AI. Organizations are racing to deploy trillions of dollars of capital equipment, making consequential decisions faster than at any point in history.
The pressure revealed that traditional consultancy models built on outside-in analysis were not designed for this moment. Outside-in frameworks with no clear integration path simply do not hold up when the stakes are this high and the clock is moving this fast. The Advisory practice is a direct response to that gap. We bring operators who have lived in these environments, made these calls, and steered the foundational companies that architected the modern tech stack.
I spent my career at the friction point where plans meet P&L, in some of the most demanding environments in tech. I drove data center and edge marketing at Intel and led marketing and communications at Micron. Both roles put me at the table where decisions were made, where go-to-market battles were won or lost, and where the story you told about your technology not only determined your product success, but the industry’s trajectory.
When I founded TechArena in 2022, I carried all of that forward. We have collaborated with over 100 leading technology companies, helping them claim market advantage in a landscape that was not waiting for anyone to catch up. Our work crystallized my thinking about what businesses actually need right now: operating experience from someone who has sat in your seat and can help you move forward with confidence.
The pressure is simultaneous and everywhere. Silicon design cycles are accelerating, and data center buildouts that once took five years are happening in 18 months. Leaders are being asked to get product strategy, competitive positioning, go-to-market, and financial governance right, all at once.
The executives I talk to are not short on ambition or know-how. They are short on the right kind of counsel, someone who has navigated this specific terrain at scale and can step in immediately, assess the situation, and turn potential into real business value. That kind of advisor changes the equation in ways that static analysis cannot replicate.
Go-to-market is probably the most acute pain point. Companies are launching products into markets that are still being defined, competing for mindshare with dozens of well-funded players, trying to build routes to market that did not exist two years ago. Getting that right matters enormously for where a company lands in the ecosystem hierarchy.
Competitive narrative is close behind. In a landscape where technical differentiation is hard to sustain, the story you tell about your position in the value chain can be the deciding factor in whether customers, partners, and investors align behind you.
Organizational readiness is moving up fast on the list too. Companies that scaled aggressively in recent years are now restructuring for AI-native operations. Leadership development and cultural transformation are real operational challenges, not soft-skills exercises, and that is an area where our advisors bring a depth of experience that is hard to find anywhere else.
The advisors we have brought together have grown multi-billion dollar businesses and led organizations through the defining technology inflections of the last two decades. I can’t wait to see the impact that these proven operators can deliver to drive disproportionate growth for our clients.
Early results are already proving the model. For example, Axelera AI came to us with a specific market opportunity. The Advisory team researched their position, helped frame the opportunity clearly, and delivered an action plan they could execute. That is exactly what we are built to deliver, and it is the standard every engagement will be measured against.
If the thought of accelerating your team’s ambitions resonates with you, come check us out.

Back in 2015, the “godfather of AI” Geoffrey Hinton made a bold prediction: stop training radiologists immediately, because deep learning would render them obsolete within five years. A decade on, that prediction has not come to pass. Radiologists remain in just as much demand, a reminder of how much accuracy and safety matter in this field and of the unique challenges of adopting AI within it.
My recent conversation with Tapan Shah, AI Architect at Innovaccer and Agentic AI Work Group Lead at the Coalition for Health AI (CHAI), alongside our Data Insights co-host Jeniece Wnorowski from Solidigm, shed light on some of the challenges of creating scalable AI systems for healthcare. Tapan’s role involves building AI systems and agents that operate in real healthcare environments and enterprise systems, where outputs directly affect patient and provider outcomes.
In Tapan’s view, the hardest problem in healthcare AI is not creating the right models or algorithms, but designing the surrounding system from the ground up.
Tapan opened with an example that cuts to the heart of the challenge. An AI clinical note generator built for a cardiology practice may work great in a pilot and then stumble when deployed for other disciplines like oncology or orthopedics, or even a different practice running a different electronic health record (EHR) system. Even when the underlying model remains the same, the results can be vastly different based on the medical discipline.
“Scaling AI into enterprise healthcare is less of an AI problem and more of a system design problem,” Tapan said. “The real problem here is whether in real-world situations, an AI agent being developed has the right level of access and the capability to create sufficiently transparent and explainable recommendations that even a skeptical clinician can accept.”
In the past decade, the healthcare AI industry has undergone a seismic shift from building predictive models to building agents. Historically, validating an AI system was relatively straightforward: train a model, measure accuracy on a holdout set, and deploy. This has been successfully validated in cases like early tumor detection, says Tapan.
Agents are a fundamentally different beast. They pull from multiple data sources, invoke various tools, and combine these inputs to perform complex tasks. Often there is no single source of truth, and clinicians can interpret the same data differently. Data can be missing, or certain users may lack access to certain tools or software. The challenge becomes ensuring that the agent behaves safely and predictably, even in novel scenarios.
And because sensitive data is being handled, safeguards need to be built into the system from the get-go. For instance, a cardiology clinical note generator should not have access to a patient’s psychiatric records.
When the topic turned to governance, Tapan pushed back against the assumption that governance is primarily about controls and restrictions.
“AI governance is not a constraint, it’s enablement,” he said, comparing a good governance framework to a constitution: it can be used as a binding document, or it can serve as the foundation for doing genuinely useful things, based on how you build and use it.
He illustrated this with a scenario in which an authorization agent’s auto-approval rate shifted from 70% to 90%. Effective governance would mean detecting the shift, reviewing the agent’s complete decision graph, and identifying the root cause. A successful governance model enables that investigation to happen in minutes rather than weeks.
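A governance signal like that approval-rate shift is straightforward to monitor. Here is a minimal sketch; the baseline and tolerance values are illustrative placeholders, not Innovaccer’s actual thresholds:

```python
def approval_rate_drifted(decisions, baseline=0.70, tolerance=0.10):
    """Flag when an agent's auto-approval rate drifts beyond a tolerance
    band around its expected baseline.

    decisions: iterable of booleans, True = auto-approved.
    Thresholds here are illustrative; a production system would also
    window the data and log the decision trail for root-cause review.
    """
    decisions = list(decisions)
    if not decisions:
        return False
    rate = sum(decisions) / len(decisions)
    return abs(rate - baseline) > tolerance

# A jump from the ~70% baseline to 90% auto-approvals trips the check.
print(approval_rate_drifted([True] * 9 + [False] * 1))  # True
print(approval_rate_drifted([True] * 7 + [False] * 3))  # False
```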
The thorniest issue in the conversation was accountability, especially as AI agents take on decisions with both clinical and administrative consequences. Tapan was candid: there is no perfect solution yet. Legal frameworks are still catching up to the question of what it means for an AI agent to make a consequential decision.
Innovaccer’s current approach is to make sure that there is comprehensive logging of every AI decision, granular access control for agents, and human oversight with the ability to override. For all clinical use cases, and many administrative ones, a human remains in the loop, able to review and reverse any AI-generated decision. As legal and governance frameworks evolve, these foundations will provide the structure to adapt.
When asked about measuring long-term strategic value, Tapan pointed to two holy grails: improved patient and provider outcomes. Treatment authorizations are a good example of where AI intervention can help, he explained.
“There are cases where it can take upwards of two to three weeks for a prior authorization for a procedure, that leads to delay in care,” he said. “If we can bring that down to, let’s say, a day, less than a day, even a few minutes, it actually impacts patient outcomes and cost of care.”
On the other end, freeing clinicians of administrative burdens allows them to spend more of their time caring for patients, reducing burnout and stress levels.
And because healthcare AI serves multiple stakeholders, including operations, compliance, and clinical teams, a scalable solution must be built on solid system design principles, with observability, tracing, and monitoring in place from the very beginning.
Innovaccer’s approach demonstrates the challenges in building a successful system that can work across multiple specialties in real-life hospital scenarios. As integrating AI in healthcare has shifted from building models to building agents, the hardest problem to solve isn’t technical performance, but rather ensuring safety, accountability, and governance.
Tapan’s framing that governance should be treated as enablement, not constraint, feels like an important mindset shift for leaders trying to move beyond the pilot stage. By helping to reduce authorization times and administrative burden, AI can help provide long-term benefits such as better patient care and provider experience.
If you’re interested in learning more, check out the full podcast. In addition, the Department of Health and Human Services recently published updated guidelines for AI, and the CHAI and Innovaccer websites provide useful guidance on the use of agentic AI in healthcare settings.

Last week, MLCommons released results for MLPerf Inference v6.0, setting new records as the benchmarking suite expands to keep pace with the diversity and scale of real-world AI deployments. Showcasing improved performance, new benchmarks for both data center and edge systems, and unprecedented system scale, the tests come at an opportune time for technology decision-makers facing pressure to move models into production.
The Inference v6.0 suite included 11 benchmarks for data centers and eight for edge. Five of the 11 data center tests were either new or substantially updated in v6.0, a rate of change that reflects just how fast the AI model landscape is shifting. Here’s what’s new:
Lambda tested on the new GPT-OSS 120B benchmark as part of its first-ever Open Division submission, an effort that went beyond standard software tuning into algorithm-level research. The company explored smarter token routing across experts in the mixture-of-experts architecture, selectively directing tokens to the second-best expert when the top choice becomes overloaded.
"There's a basic trade-off between the quality of the result and the load balancing of the system," said Chuan Li, Lambda's chief scientific officer. "If we can tune that trade-off well enough, you can still meet an upper quality standard but get even better throughput."
The approach points to a dimension of inference optimization that many teams overlook. Hardware improves with each generation. Software stacks mature every six months. But algorithm-level creativity on top of both can unlock performance gains that off-the-shelf tuning leaves on the table.
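To make the routing idea concrete, here is a toy sketch of overflow routing in a mixture-of-experts layer: each token goes to its top-scoring expert unless that expert is at capacity, in which case it spills to its second-best expert. This is a simplification of the trade-off Lambda describes, not its implementation; real routers batch this work and weight expert outputs by router probabilities.

```python
import numpy as np

def route_with_overflow(scores: np.ndarray, capacity: int) -> np.ndarray:
    """Assign each token to an expert given router scores.

    scores:   (num_tokens, num_experts) router logits
    capacity: maximum tokens any one expert may serve
    Tokens are processed in order of router confidence; when a token's
    top expert is full, it is routed to its second-best expert instead.
    """
    num_tokens, _ = scores.shape
    load = np.zeros(scores.shape[1], dtype=int)
    assignment = np.full(num_tokens, -1, dtype=int)
    for t in np.argsort(-scores.max(axis=1)):  # most confident tokens first
        best, second = np.argsort(-scores[t])[:2]
        if load[best] < capacity:
            assignment[t] = best
        else:
            assignment[t] = second             # spill to second-best expert
        load[assignment[t]] += 1
    return assignment

# Three tokens all prefer expert 0, which can only hold two of them;
# the least confident token spills to expert 1.
scores = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
print(route_with_overflow(scores, capacity=2))  # [0 0 1]
```

The quality/throughput trade-off Li describes lives in that `else` branch: serving a token with a slightly worse expert costs some output quality but keeps every expert busy.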
Beyond the data center updates, the suite introduced a new YOLOv11 benchmark for edge, updating the edge object detection benchmark to current industry practice. In a sign of strong interest, 30 submissions were received for this test, the most of any in the edge category.
One of the most interesting trends from the v6.0 data is the rapid growth of large-scale, multi-node system submissions over the last year. The v5.0 release last April included just two multi-node submissions. That number climbed to 10 in v5.1, and further to 13 in v6.0. The largest system submitted in this round spanned 72 nodes and 288 accelerators, quadrupling the node count of the largest system from the prior two rounds.
The shift reflects where enterprise AI deployments are heading. As more AI applications move into production at scale, the demand for large, distributed inference systems is growing as well. This complexity introduces technical challenges, and multi-node benchmarks are better suited to demonstrate system performance under such conditions.
The v6.0 submission roster grew to 24 participating organizations, including first-time submitters Inventec Corporation, Netweb Technologies India, and Stevens Institute of Technology. The full list spans hyperscalers, cloud providers, OEMs, and independent software vendors, making the dataset especially useful for procurement analysis.
Lambda was the only AI-native cloud provider to publish results for both inference and training on NVIDIA's Blackwell Ultra platform, benchmarking on both a single-node GB300 system and the rack-scale NVL72. The company treats benchmarking not as a marketing exercise but as an operational checkpoint. "We literally see this benchmark as a part of our new product introduction pipeline," Li said. "Before we offer this product to our customer, we need the product to be benchmarked."
That positioning carries weight for procurement teams evaluating cloud providers. Lambda is platform-neutral, with no proprietary silicon to promote, which gives it a clear incentive to pursue transparent, reproducible results. The company publishes its benchmark code as an open-source repository so customers can verify performance on their own infrastructure.
By adding reasoning models, text-to-video, vision-language, and modernized recommender workloads in a single release, MLCommons is tracking the speed at which the AI workload landscape is changing. Two of the new benchmarks arrived through direct collaboration with industry practitioners: Shopify contributed the VLM dataset using real product catalog data, and Meta drove the updated DLRM model based on its sequential recommendation architecture. That kind of industry partnership keeps the benchmarks grounded in production reality rather than academic abstraction.
For procurement teams, these updates offer practical benefits beyond the headline numbers. Decision-makers can dig into which organizations are submitting on the new benchmarks, how their results scale across node counts, and where software and algorithm optimizations are driving as much lift as hardware. Lambda's Open Division submission is a good example. It demonstrated that creative approaches to expert routing can push throughput higher without sacrificing output quality, the kind of insight that matters when you're sizing infrastructure for production inference.
Looking ahead, Li pointed to the upcoming MLPerf Endpoint format as a significant evolution. Rather than reporting a single throughput number per system, the new format will present a trade-off curve between latency and throughput, giving customers a way to evaluate systems against their specific service-level requirements. That shift would make the benchmarks more directly actionable for organizations balancing real-time responsiveness against batch processing efficiency.
As AI infrastructure decisions get larger and more consequential, MLPerf remains the go-to industry resource where competing systems can be compared on a level playing field. That kind of transparency is not just useful. It is essential.

The AI era is generating investment on a scale that previous technology cycles never approached. The central question facing business leaders has shifted from whether to invest to how to convert that investment into lasting competitive advantage. TechArena Advisory breaks from the traditional consulting model by bringing C-suite operators to the table who have grown multi-billion-dollar businesses from the inside out and understand what it takes to turn investment into value.
Advisor Co-Founder Jeni Barovian has navigated technology waves that reshaped entire industries, from networking and edge computing to data center platforms and AI silicon, holding product and P&L leadership roles at Intel and Altera. What distinguishes her is not just technical depth but a discipline around equipping organizations to drive business outcomes. In this edition of our Q&A series, she discusses the gap between AI investment and realized value, the three areas where that gap most often surfaces, and what it takes to help organizations leverage AI to create scalable competitive edge.
Throughout my career, I’ve worked through several major technology waves — the internet, mobility, and cloud. As disruptive as those were, AI is different.
The pace of change is orders of magnitude faster, and the scale of investment is unprecedented. According to Goldman Sachs, companies spent more than $400 billion on AI infrastructure in 2025 alone — data centers, GPUs, platforms, and the talent to run them. The firm projects that number will swell to $500 billion in 2026. Now, leadership teams across the value chain are under pressure to turn that investment into real economic value.
At the same time, AI is reshaping how companies operate and how work gets done. Nearly all knowledge worker roles will be affected by AI-driven workforce transformation.
This moment requires a different kind of leadership and guidance. The winners won’t just deploy AI — they’ll translate it into productivity, new revenue streams, and lasting competitive advantage.
I’ve spent my career building and scaling complex product portfolios and businesses — networking, communications, edge computing, and data center platforms. That work sits right at the intersection of infrastructure, product strategy, and business outcomes.
My superpower is connecting technology decisions to business impact by equipping and empowering teams to act. That includes navigating some of the most complex forms of organizational change — businesses scaling through major technology transitions, and M&A from both sides of the table. When you’ve led through those conditions, you develop a sharper instinct for where strategy actually holds under pressure and where it doesn’t.
In this moment, companies are moving incredibly fast, but speed alone doesn’t create value. AI can streamline development, analysis, and execution across nearly every function — engineering, product management, marketing, operations. But without clear strategic direction, you get a lot of noise and homogeneity.
What organizations need right now are experienced operators who understand how to turn new technology into differentiated products, stronger go-to-market strategies, and measurable business results.
AI can accelerate everything, but only strategy turns that acceleration into value, and proven operators can bring that strategic judgement and clarity.
The most common challenge I see right now is a gap between AI investment and realized value. Companies have invested heavily in infrastructure and tools, but many leaders are still figuring out how to translate that capability into real business impact. That challenge typically shows up in three areas where I spend most of my time advising.
First is product and technology strategy — identifying where technology actually creates differentiated value rather than just adding features.
Second is P&L optimization — ensuring new capabilities are built and sold in ways that generate revenue growth, and the organization is equipped to operate with maximum efficiency. That calculus increasingly includes sustainability: in my experience, companies that treat environmental requirements as a financial lever — not as a compliance exercise — tend to build more resilient P&Ls.
Third is organizational execution — aligning teams, workflows, and decision-making so the company can move at the pace the technology now allows. That often extends to governance — ensuring boards and senior leadership have clear accountability structures for transformation commitments, not just results.
My background running product organizations for multi-billion-dollar businesses bridges the gap between technology ambition and operational reality.
Three areas stand out to me right now:
First, turning AI infrastructure into economic value. Companies have already made enormous investments in compute, data platforms, and tooling. The question now is where and how that translates into top-line growth, operational velocity, and competitive advantage.
Next, product differentiation in an AI-accelerated world. When everyone has access to similar tools and models, the real differentiator becomes strategy — how companies apply AI to meet customers where they’re at today, and solve meaningful problems.
Finally, organizational adaptation. AI is changing how work gets done across nearly every function, and leaders need to rethink processes, decision-making, and team structures to take full advantage of it. The leaders who get this right invest as seriously in their people as in their platforms — because AI amplifies human capability, it doesn’t replace it.
These three shifts — infrastructure, products, and people — will determine who leads the next wave of technology.
I focus on helping companies accelerate their path to business impact.
Most leadership teams already know what they want to achieve. Where they get stuck is the gap between a compelling strategic vision and the organizational readiness to execute on it. I’ve sat on both sides of that gap — as an operator accountable for delivering results under pressure, and as someone who has led businesses through the kind of structural change where that gap can widen fast if you let it.
In working with leadership teams, my goal is to help them identify where technology can create the most meaningful value — whether that’s new revenue streams, faster innovation cycles, improved operational efficiency, or stronger market positioning.
I help translate strategy into execution so teams can move quickly and confidently. That’s when transformation becomes real.

When leaders think about cybersecurity incidents, they often picture highly sophisticated attacks launched by external adversaries using advanced tools and malware. These scenarios dominate headlines and executive discussions. In reality, many of the most serious data exposure incidents do not begin with complex technical breaches. They begin with a routine human action inside the organization.
An employee forwards a document to a personal email account to continue working after hours. A team member shares internal files with a partner to move a project forward more quickly. A departing employee emails themselves information they believe they helped create.
Individually, these actions may seem harmless. Collectively, they represent one of the most common ways sensitive information leaves organizations today. What makes this risk especially challenging is that it rarely resembles a traditional security incident at the outset.
For leaders, this is a critical blind spot.
Email remains one of the most widely used tools in organizations. It enables collaboration, supports distributed teams, and connects employees with partners and customers. Because email is deeply embedded in daily work, it is often viewed as a productivity tool rather than a potential risk vector.
Research consistently shows that human actions play a major role in data exposure incidents. The Verizon Data Breach Investigations Report highlights that human error and misuse remain significant contributors across industries. Many of these incidents involve employees unintentionally sending information to the wrong recipient or sharing sensitive files outside the organization.
These actions are rarely malicious. In most cases, employees are simply trying to work more efficiently. The leadership challenge lies in recognizing that routine decisions can carry serious consequences.
Once sensitive information leaves the organization through email, it can quickly spread beyond control. When data reaches personal accounts, unmanaged devices, or external parties, recovering it becomes extremely difficult.
Email-driven data loss is frequently underestimated because it does not trigger the same alerts as malware or system intrusions. The activity often appears legitimate: an authorized employee sends an email from a corporate account, and the content may not contain obvious indicators that automated tools detect. This creates a dangerous gap between intent and impact.
Traditional security tools were designed primarily to identify overtly malicious activity, such as unauthorized access or suspicious software. They are far less effective at detecting subtle behaviors that lead to data loss through normal communication channels.
As a result, organizations often discover these exposures only after information has already left their environment. Research from the Ponemon Institute (Cost of Insider Risks Global Report 2023) shows that insider-related incidents, including accidental data sharing, continue to grow in both frequency and cost, and often take longer to detect because they occur through legitimate access paths.
For leadership teams, this means the greatest risk does not always come from external attackers. It often comes from ordinary actions that blend seamlessly into everyday work.
Addressing this challenge requires leaders to move beyond a purely technical view of security and examine how information is actually used inside the organization. Security controls may define how data should move, but daily work determines how it truly flows.
Modern employees operate in highly connected environments. Remote work, hybrid teams, and constant collaboration with external partners allow information to move across devices, platforms, and organizations faster than ever before. At the same time, many organizations maintain strict data policies that were designed for more controlled environments and do not always align with how work is performed today.
When policies feel disconnected from real workflows, employees often adopt informal workarounds to stay productive. Email frequently becomes the bridge between systems, devices, and teams. Documents are forwarded to personal accounts to continue work after hours, shared with external collaborators to accelerate projects, or moved between platforms that are not fully integrated.
These actions are rarely malicious. In most cases, employees are simply trying to solve problems and move work forward. Yet these everyday decisions can unintentionally expose sensitive information outside the organization’s control.
This reality highlights an important shift in modern cybersecurity thinking. The most significant risks do not always originate from sophisticated external threats. They often emerge from normal human behavior operating within complex systems. Organizations that recognize this dynamic begin to design security strategies that guide behavior, support safe collaboration, and align protection with how people actually work.
Recognizing the human element of data protection creates an opportunity for more effective leadership. Rather than focusing solely on preventing mistakes, organizations should aim to make secure behavior the easiest option. This requires clear communication, supportive technology, and a culture that values responsible information sharing.
Effective leadership approaches typically include:
When employees understand both the risks and the reasons behind security practices, they are more likely to follow them.
Historically, many organizations addressed data exposure only after discovering that sensitive information had already been shared externally. This reactive approach forces security teams to respond after the damage may already be done.
Modern organizations are shifting toward proactive awareness. By understanding how information typically flows across the organization, leaders can spot unusual patterns earlier and intervene before significant exposure occurs.
Just as importantly, prevention strategies can guide employees at the moment of decision. Contextual warnings, reminders, or policy prompts can encourage employees to pause before sending sensitive information outside the organization.
These small interventions can significantly reduce risk without hindering productivity.
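A moment-of-decision prompt can be surprisingly simple at its core. Here is a minimal sketch; the marker phrases and corporate domain are hypothetical placeholders, not a real DLP policy:

```python
SENSITIVE_MARKERS = ("confidential", "internal only")  # illustrative patterns
CORPORATE_DOMAINS = {"example.com"}                    # hypothetical allowlist

def should_prompt(recipient: str, body: str) -> bool:
    """Return True when a message looks sensitive and is addressed
    outside the corporate domain, i.e. the moment to show a warning."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    external = domain not in CORPORATE_DOMAINS
    sensitive = any(marker in body.lower() for marker in SENSITIVE_MARKERS)
    return external and sensitive

print(should_prompt("me@gmail.com", "Attaching the CONFIDENTIAL roadmap"))    # True
print(should_prompt("me@example.com", "Attaching the confidential roadmap"))  # False
```

Real deployments pair checks like this with content inspection and policy engines, but the leadership point stands: the intervention happens before send, not after exposure.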
Email-driven data loss highlights a broader truth about cybersecurity leadership. Security challenges are not solved by technology alone. They are shaped by people, processes, and culture.
Executives set the tone for how seriously information protection is taken. Managers influence how teams collaborate and share data. Employees ultimately determine how information moves through daily workflows. Organizations that successfully protect sensitive information recognize this shared responsibility. They invest not only in security tools, but also in awareness, communication, and leadership engagement.
The goal is not to eliminate human involvement in data handling; that is neither realistic nor desirable. The goal is to guide behavior in ways that support both productivity and protection. In many organizations, the next data exposure will not begin with a complex cyberattack. It will begin with a simple email. Leaders who recognize this reality are far better positioned to prevent it.

The moments that help define an employee's trajectory, including performance reviews and manager feedback, are too consequential to get wrong. AI promises to help managers be better prepared for these important conversations by presenting clear insights that draw from the sea of daily work data. But it can only deliver when it is trusted on all sides.
In my recent conversation with Maher Hanafi, senior vice president of engineering at Betterworks, and Solidigm’s Jeniece Wnorowski, we discussed what it takes to turn AI’s potential into a trusted and valued enterprise solution.
Betterworks describes itself as a talent and performance management platform for global enterprise customers, but Maher is quick to distinguish it from traditional HR software. Where legacy tools function as administrative record-keepers by tracking history, storing documents, and managing lists, Betterworks aims to orient its platform around the flow of work.
“We were looking at the data from a performance lens,” Maher explained. “We’re trying to enable anything that helps go beyond just tracking history…to focus more on the flow of work.” For large enterprises with complex organizational structures spanning multiple regions, that means helping individuals, managers, and business units connect their daily efforts to company-wide goals, a capability that only becomes more valuable, and more technically demanding, as AI matures.
Maher offered the useful frame of thinking about AI as enabling “horizontal intelligence.” Before AI, Betterworks’ modules — goals, feedback, one-on-one meetings, talent and skills — operated as largely separate domains. Generative AI has made it possible to interconnect those domains in ways that weren’t previously practical.
“With AI today, it’s just way easier to interconnect all of these,” he said. “I think SaaS products and SaaS platforms will be built as more of an interconnected set of layers that will break the silos between different components and features.”
In practical terms, this means a manager preparing for a one-on-one meeting can receive and review AI-generated insights drawn from an employee’s recent goals, feedback, and performance history before a conversation, rather than manually pulling together and examining months’ worth of data.
When AI provides insights that can influence such important conversations, it’s paramount that all parties can trust the system’s output. Operating in this environment, Betterworks has emphasized responsible AI guided by two principles in particular: transparency and explainability. Transparency means the system can show users what sources it drew on to generate a response. Explainability means users understand why an AI suggestion is what it is. With this foundation, when managers are giving feedback to employees based on information AI provides, they can make suggestions and have confidence in the underlying insights.
“We are trying to use AI as a way to really get you as a better individual, better member of the organization and contributing to the big picture versus having AI take control,” Maher said. “You should be in the driver’s seat. AI is just there to help you and be a co-pilot, nothing else.”
As the conversation turned to broader lessons, Maher offered practical guidance for engineering and technology leaders navigating AI adoption inside enterprise organizations.
His first recommendation is simply to stay informed without becoming overwhelmed. “AI is moving very fast…. Picking the one out of the haystack is very challenging,” he said. To manage that, he created what he calls an AI Engineering Lab at Betterworks, a structured environment where engineers could explore tools and run experiments, rather than waiting for top-down mandates on which technology to adopt.
He also urged leaders to take the financial dimension seriously. “There was a huge risk of AI taking too much money without achieving ROI,” he said. “Turning into someone who cares more about the financial aspect and looking at costs on a frequent basis…was a huge success.” In his view, senior technology leaders increasingly need to think with some of the rigor of a chief financial officer when it comes to managing AI infrastructure spend.
Finally, he pointed to the value of frameworks. His own AI maturity framework and a flywheel model focused on planning, building, and optimizing AI systems have helped keep the team oriented even as the technology underneath them continues to shift.
Maher’s perspective reflects a measured but substantive view of what AI can deliver in enterprise software, one grounded in the realities of compliance-heavy industries and the organizational complexity of global customers. Rather than positioning AI as a transformation layer bolted onto an existing product, Betterworks has committed to rebuilding the platform’s foundations to make intelligence a native capability. For technology decision makers evaluating AI-powered SaaS in regulated environments, the Betterworks story offers a useful model.
Learn more about Betterworks at betterworks.com, and watch our full podcast episode.

Technical tools exist for almost every problem, but the human side of change frequently lags behind.
Managers today serve as the pressure valve for their organizations, navigating hybrid teams and AI disruption while trying to keep people engaged. Senior teams often struggle to provide clear organizational direction to prevent burnout and retain key talent during these disruptive times.
TechArena Advisory provides a high-impact alternative to traditional consultants by offering the strategic blueprints of operators who have already scaled global businesses. Dana Bos brings expertise at the intersection of people, strategy, and operations to help our clients move faster and with far less friction. She shares her insights on building manager readiness and shaping everyday ways of working that keep teams connected.
The ground has shifted under tech leaders’ feet, regardless of the sector or the size of their company. AI has moved from “interesting experiment” to “core to our strategy” almost overnight. Products, roles, and expectations are shifting in months, not years.
Many companies are trying to respond with teams and management systems that are already stretched. I’m seeing a lot of good intentions but not a lot of support for the people who actually have to make these changes. Managers are improvising at times without clear organizational direction, employees are fatigued by constant pivots and mixed messages, and systemic change management is getting overlooked. I want to help leaders at all organizational levels respond to and meet this seminal moment in a way that drives sustainable momentum and attracts and retains talent.
My expertise sits at the intersection of strategy, people, and day-to-day operations. Leaders are under immense pressure to deliver results, navigate new tech, and retain their key people. If you put an A player in a C system, the system will win every time. The talent market is extremely competitive right now – you need an operating model that drives speed and value while also creating a culture that invigorates your talent and makes them want to stick around.
I help translate high-stakes challenges into actionable strategies that teams can execute, building leadership at every level so success doesn’t rest on a few heroic leaders. We build the skills that power the operating model with consistent concrete behaviors, productive conversations, and repeatable routines.
When it comes to change management, I work with leaders to translate the initiative and goals into a plan that focuses on what will change day to day: what managers say on Monday, what teams feel in the next all-hands, and how this lands for the talent you need to keep. In a moment when AI and constant change are colliding with real human limits, that grounded, people-centered focus keeps execution on track and results sustainable.
The first big challenge I see is a highly competitive talent landscape. I also see companies recognizing the need to change their strategy or positioning and working hard to get the rest of the company on board and moving with them. Finally, leaders are trying to roll out internal AI-enabled workflows while their people are anxious, confused, or quietly resisting. The tools are there, but the human side of change is lagging. As work becomes more distributed and automated, it’s easy for trust, candor, and accountability to erode.
Manager capability has never been more pivotal. Many leaders were promoted for technical excellence and now find themselves running hybrid teams, navigating AI, and trying to keep people engaged as the strategy and work shift, which requires a very different skillset from technical delivery.
My work sits right at those intersections: structuring organizational growth and change in a way people can easily follow, building strong managers who can run healthy, high-performing teams, and shaping everyday ways of working that keep people connected and willing to speak up.
Right now, I see a few areas where getting it right has an outsized impact. The first is how leaders talk about and model the use of AI internally, and how they talk about the company’s products and services intersecting with the markets during this disruptive time. If senior teams are vague or inconsistent, everyone else feels it and fills in the gaps with fear.

The second is manager readiness. Managers are the pressure valve for almost everything right now, and if they don’t have the skills and support to lead through change and uncertainty, you will see it in burnout, rework, and loss of talent. That includes how they run meetings, how they handle tension, and how they coach people who are worried or skeptical.

The third is the quality of cross-functional collaboration. With work and teams dispersed, botched handoffs and misunderstandings are expensive. The basics—how decisions are made, how conflict is handled, how wins and misses are talked about—either build trust and ownership or quietly undermine them.

When these areas are neglected, even strong strategies stall. When they’re addressed, organizations move faster and with far less friction.
I help organizations bridge the gap between strategic goals and daily execution. By developing leaders who communicate with clarity and managers who foster collaborative, AI-empowered teams, I help you build a culture where high performance and engagement coexist. Over time, this manifests as cleaner execution on priorities, better retention of the key players, and an organizational resilience that turns change into a competitive advantage rather than a source of friction.

The tech landscape is shifting at a breakneck pace, pushing the boundaries of traditional infrastructure and demanding a new level of strategic agility. As AI labs and hyperscalers reshape the competitive environment, there is an urgent need for leaders who can translate the complex data center and cloud ecosystem into differentiated narratives and modernized go-to-market strategies.
Following the recent launch of the TechArena Advisory, we are excited to highlight the exceptional operators bringing C-suite-grade strategic intelligence within reach of organizations at every stage of growth. The Advisory represents our commitment to providing a high-impact alternative to traditional consultants, offering the strategic blueprints of those who have already built and scaled multi-billion-dollar businesses.
Raejeanne Skillern has spent her career navigating these shifts, scaling organizations and driving disproportionate value for global enterprises. With 30 years of executive leadership spanning from silicon to solutions, she understands the precision required to move from “chaos to strategy” in the AI race. To help our audience get to know the experts behind the Advisory, we are continuing our “5 Fast Facts” Q&A series. In this edition, Skillern discusses the pressing need for adaptability, the importance of strong partnerships in the neocloud era, and her personal commitment to stepping into the arena alongside the next generation of business leaders.
The explosion of AI, specifically LLMs and agents, has pushed data center and cloud expertise into the center of every industry. Companies must rapidly evolve their business and marketing strategies to take advantage of the trillions of dollars' worth of market opportunity being created. The ecosystem is rapidly expanding, with neoclouds, AI labs, and thousands of startups joining the hardware providers and hyperscalers in this AI race.
I see an urgent need for experts who can distill this chaos into a clear strategy that drives real business acceleration.
As a 30-year student of and executive within the cloud, data center and AI technology industry spanning silicon to solutions, I have an extensive track record in building and scaling technology-based businesses. My proven success in owning and growing multi-billion-dollar P&Ls has earned me a reputation as a change agent, navigating the complex tech landscape to drive disproportionate growth for companies.
Leaders are currently caught between two fires: the need for immediate resource efficiency and the need for extreme flexibility to keep up with AI innovation. They must optimize or customize their products and roadmaps for resource efficiency while remaining agile enough to match the speed of market adoption. They must also modernize global go-to-market execution to provide intelligent, connected workflows across marketing and sales teams, leveraging personalized campaigns that meet customers along their unique journeys. Many companies may also be working with hyperscale cloud providers and silicon technology companies in new ways and need to evolve their selling, partnership, and engagement strategies to build strong relationships with the emerging leaders in this space.
Helping startups scale, modernizing global go-to-market execution, breaking through with differentiated corporate leadership narratives in a noisy arena, and pressure testing business strategy for competitive strength, end user value, and adaptability to a rapidly moving industry.
Throughout my career, I have always appreciated when business leaders, board members, and technology experts have partnered with me to break down my challenges and ideate with me on how I can improve myself, my organization, and my business. I want to be that support structure for growing businesses. I want to be their right hand, stepping into the arena with them whether setting a business, product, or marketing strategy, positioning an organization to scale for growth, or translating the complex data center, AI, and cloud ecosystem to modernize sales and partnerships.

In a strategic move that alters its long-standing business model, Arm this week unveiled the Arm AGI CPU, its first-ever production silicon product designed for data center AI infrastructure.
While the company has spent 35 years providing the blueprints for others to build upon, the AGI CPU represents Arm’s first direct entry into the commercial silicon market.
Developed in collaboration with Meta and enabled by Synopsys’ full-stack design portfolio, the AGI CPU is aimed at a burgeoning category of agentic AI workloads. These workloads, in which AI models reason, plan, and execute tasks autonomously, require high levels of scalar performance and memory throughput.
Bringing a 136-core, 3nm processor to market as a first-time silicon vendor required a comprehensive design infrastructure. Arm utilized Synopsys’ end-to-end portfolio, spanning electronic design automation (EDA), silicon-proven interface IP, and hardware-assisted verification (HAV).
The technical workflow leveraged Synopsys’ EDA solutions to manage the complexity of advanced process nodes. These tools supported synthesis, power integrity analysis, and signoff timing, which were necessary to meet the performance-per-watt targets specified for next-generation AI environments.
To manage data movement, Arm integrated Synopsys’ interface IP solutions. These components act as the critical communication links within the SoC, facilitating high-speed data transfer between the CPU cores and external memory or accelerators. By using pre-validated IP, Arm aimed to reduce the inherent risks associated with first-pass silicon.
Verification played a central role in the development timeline. Using the Synopsys ZeBu Server 5 emulation system and HAPS prototyping platforms, Arm’s engineering teams were able to validate system functionality and software compatibility months before the physical chips returned from the foundry. This “shift-left” strategy is a standard industry practice to ensure that hardware and software are ready for deployment simultaneously.
Mohamed Awad, executive vice president of the Cloud AI Business Unit at Arm, noted the collaborative nature of the project.
“The Arm AGI CPU reflects the strength of our SoC design and the effectiveness of our collaboration with Synopsys,” he said. “Their design, IP, and verification solutions supported the development and validation of our breakthrough performance-per-watt chip for next-generation AI infrastructure.”
The AGI CPU features up to 136 Arm Neoverse V3 cores per socket, operating within a 300-watt thermal design power (TDP). Built on TSMC’s 3nm process, the chip utilizes a dual-chiplet architecture. It supports 12 channels of DDR5 memory at speeds up to 8800 MT/s, providing approximately 825 GB/s of aggregate bandwidth. For I/O, the processor includes 96 lanes of PCIe Gen 6 and native CXL 3.0 support for memory expansion.
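The aggregate bandwidth figure follows from the channel count and transfer rate. A back-of-envelope sketch (assuming the standard 64-bit DDR5 channel width; the ~825 GB/s figure quoted above sits slightly below the theoretical peak, consistent with rounding or real-world overhead):

```python
# Back-of-envelope peak memory bandwidth for the quoted configuration.
# Assumption: each DDR5 channel transfers 8 bytes (64 bits) per transfer.
CHANNELS = 12            # DDR5 memory channels
SPEED_MT_S = 8800        # mega-transfers per second, per channel
BYTES_PER_TRANSFER = 8   # 64-bit channel width

# Peak bandwidth in GB/s = channels * MT/s * bytes, scaled from MB to GB
peak_gb_s = CHANNELS * SPEED_MT_S * BYTES_PER_TRANSFER / 1000
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")  # ~844.8 GB/s
```

The gap between the ~844.8 GB/s theoretical peak and the quoted ~825 GB/s is typical of how vendors report sustained or derated aggregate bandwidth.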
Arm’s internal data suggests that the AGI CPU can provide a 2x increase in performance-per-rack compared to current x86 platforms. By targeting “agentic” workloads, Arm is positioning itself to handle the coordination and data-orchestration tasks that sit alongside dedicated AI accelerators like GPUs.
Arm’s shift from an IP architect to a merchant silicon provider is technically impressive, but it creates a delicate situation with its existing licensees. Companies like NVIDIA, AMD, and Intel, who all license Arm IP, now find themselves competing directly with their technology provider in the data center. Arm will need to manage these relationships carefully to avoid appearing to favor its own silicon over the IP it sells to others.
The 300W TDP for a 136-core part is a clear attempt to challenge x86 dominance in power-constrained data centers. In an era where power availability is the primary bottleneck for AI scaling, Arm’s decision to focus on performance-per-watt is a pragmatic entry strategy. However, the true test will be real-world software optimization and how effectively the AGI CPU handles the “unstructured” nature of agentic AI compared to established general-purpose processors.
The naming of the AGI CPU is a bold marketing move. While the chip is designed to support the infrastructure for autonomous AI agents, the term AGI (Artificial General Intelligence) remains a theoretical milestone in the research community. By tethering its first chip to the most hyped acronym in tech, Arm is signaling its long-term intent, though the industry will likely judge the silicon on its IPC and latency metrics rather than its nomenclature.
This launch reinforces Synopsys’ position as the necessary scaffolding for the custom silicon era. Whether it is a hyperscaler like Meta or a traditional IP house like Arm, the move toward specialized silicon is increasingly dependent on a unified full-stack design flow. For Synopsys, enabling a first-time silicon vendor to hit 3nm targets is a strong proof-of-concept for their “HAV-to-Silicon” methodology.

This morning in Paris and Riyadh, Globeholder AI officially pulled back the curtain on its Thinking Lab, a platform that signals a fundamental shift in the AI trajectory. While the tech world has spent the last few years obsessed with the creative (and often hallucinatory) capabilities of large language models (LLMs), Globeholder is betting on a different flavor of intelligence: Type-2 Reasoning for the physical world.
The core thesis of Globeholder, led by co-founders Milene Göknur Jubin, PhD, and Eren Ünlü, PhD, is refreshingly blunt: "The world is not made of text."
Most AI systems today rely on fast pattern recognition, what cognitive scientists call Type-1 reasoning. These systems excel at predicting the next word in a sentence, but they stumble when asked to authorize a $2.1 billion investment in North Sea offshore wind farms. Why? Because energy systems, infrastructure networks, and climate patterns aren’t linguistic constructs; they are governed by physics, regulation, and logistical constraints.
Globeholder’s Thinking Lab is designed to bridge this gap by acting as a "sovereign, computational software environment" where AI agents operate like scientific teams. Rather than providing a probabilistic guess, the platform deconstructs complex questions into physical components, runs simulations, and stress-tests assumptions.
From a deep tech perspective, the Thinking Lab’s architecture is its most compelling feature. Built on a modular, partner-enabled framework, it functions as an operating system for physical-world intelligence.
Key technical pillars include:
The platform’s 6-step workflow, moving from question decomposition to auditable decision delivery, aims to replace the months-long manual analysis typically performed by high-priced consulting firms with transparent, empirical answers delivered in minutes.
Globeholder isn’t going it alone. The startup is part of the NVIDIA Inception program and has deeply integrated its tech with NVIDIA’s Earth-2 and Cosmos models for large-scale weather and climate modeling. On the infrastructure side, the platform is deployed on AWS, ensuring the performance and resilience required for what they call "sovereign-grade decision-making."
The most striking revelation in the Thinking Lab release isn’t the AI itself, but how it intends to dismantle the traditional "trust-by-proxy" model of strategic consulting.
Globeholder’s competitive differentiation makes a compelling case for why the current status quo is failing high-stakes industries:

Following this week's launch of the TechArena Advisory, we are excited to highlight the exceptional operators who are now bringing C-suite-grade strategic intelligence within reach of organizations at every stage of growth. While TechArena’s foundation is built on media and tech domain marketing, the Advisory represents our commitment to providing a high-impact alternative to expensive traditional consultants. We believe that in an era of rapid disruption, organizations don’t just need advice, they need the strategic blueprints of those who have already scaled multi-billion-dollar businesses.
To help our audience get to know the experts behind the Advisory, we are launching “5 Fast Facts,” a twice-weekly Q&A series. Our first featured advisor is Lakecia Gunter, an enterprise growth architect with a career defined by leading global teams and mastering the intersection of technology and revenue. Below, Lakecia shares her perspective on the widening gap between tech ambition and business execution.
We’re at a moment where technology disruption is moving faster than most organizations can operationalize. AI, platform ecosystems, and digital infrastructure are redefining how companies compete, but many leaders are realizing that adopting technology and scaling it into enterprise growth are two very different things.
I see a widening gap across industries between technology ambition and business execution. Companies are investing heavily in AI and digital capabilities, but many are still figuring out how to connect those investments to revenue growth, ecosystem expansion, and long-term competitive advantage.
After decades of leading global teams responsible for multi-billion-dollar businesses, partner ecosystems, and product platforms, I’ve seen firsthand how technology becomes growth, or fails to.
This moment makes advisory work a priority because leaders need more than technical insight. They need guidance from operators who understand how to translate innovation into enterprise scale, revenue expansion, and lasting market leadership.
My experience sits at the intersection of technology innovation and enterprise growth.
Throughout my career, I’ve led global organizations responsible for multi-billion-dollar revenue streams, large partner ecosystems, and enterprise transformation initiatives. At Microsoft, I helped lead strategy and technical engagement for one of the world’s largest partner ecosystems. Earlier roles included direct P&L responsibility for global business units and helping scale new technology platforms into global markets.
What this brings to the moment is the perspective of an enterprise growth architect, someone who understands how technology strategy, revenue models, partner ecosystems, and organizational alignment all work together.
My superpower is helping leadership teams turn emerging technology opportunities into scalable business growth. That means aligning strategy, ecosystems, and operating models so innovation doesn’t stay in pilot mode—it drives real market impact.
Many business leaders today are navigating a difficult balancing act: investing aggressively in new technologies while ensuring those investments translate into real enterprise value.
Three challenges consistently surface in my work.
The first is AI and digital transformation execution. Organizations are experimenting with AI, but many struggle to operationalize it for measurable growth or operational efficiency.
The second is ecosystem monetization. Innovation increasingly happens through platforms and partnerships, yet many companies have not fully developed the strategies required to activate partner ecosystems as engines of growth.
The third is aligning technology investment with revenue outcomes. Digital transformation programs often focus on tools and infrastructure without clearly tying them to market expansion, customer value, or competitive differentiation.
My work helps leadership teams connect these dots, aligning technology strategy, partner ecosystems, and operating models to unlock scalable enterprise growth.
Three areas stand out as particularly critical for technology and business leaders today.
The first is AI strategy and governance. As AI moves into core business operations, organizations must balance speed of innovation with responsible deployment, security, and regulatory oversight.
The second is platform and ecosystem strategy. The most successful companies today are not building in isolation; they are architecting ecosystems. Leaders who understand how to activate partners, developers, and platforms will scale innovation far faster than those operating alone.
The third is enterprise growth architecture—ensuring that technology investments are tied to clear revenue models, market expansion opportunities, and long-term strategic positioning.
Organizations that master these three disciplines will be the ones that convert technological disruption into sustained competitive advantage.
My work centers on one core objective: helping organizations turn technology disruption into enterprise growth.
First, I work with leadership teams to build clear growth architectures, linking AI strategy, platform investments, and ecosystem partnerships directly to revenue expansion and market opportunity.
Second, I help organizations activate partner ecosystems as growth multipliers. When companies synchronize the right partners, platforms, and developer communities, they dramatically accelerate innovation and customer reach.
Third, I support leaders in creating operating models that scale transformation, ensuring that new technologies move beyond pilot programs into enterprise-wide impact.
Ultimately, the goal is simple: help companies move from experimentation to execution, ensuring investments in AI and digital platforms translate into measurable growth, stronger market positioning, and long-term competitive advantage.