Synopsys Converge 2026: From Silicon to Systems
Converge 2026 became the first real stage for the new Synopsys.
That message came through clearly as the company used its first major post-acquisition gathering to argue that the future of engineering will no longer be built in silos. In the AI era, the old boundaries between chip design, software development, system simulation, packaging, and physical behavior are collapsing. In their place is a more connected engineering stack built to handle what Synopsys President and CEO Sassine Ghazi called “the era of pervasive intelligence where AI is infused everywhere.”
Ghazi called 2026 “the year one of the new company,” a line that framed Converge as the first full public expression of what the Synopsys-Ansys combination is supposed to mean in practice. The company is still very much talking about EDA, verification, and IP, but it is now making a larger claim: that it can help customers engineer the next generation of intelligent systems from silicon to systems.
This is no longer just a story about designing chips faster. It is a story about building products that are increasingly software defined, physically complex, AI-enabled, and deeply dependent on coordination across multiple engineering domains.
Ghazi gave that vision a memorable turn of phrase when he said that as physical AI advances, “bits will inhabit and command the atoms.” It sounds dramatic, but it neatly captured the company’s larger point. The next wave of intelligent systems will not live only in data centers or on phones. They will increasingly operate in the physical world, which means engineering teams have to account for much more than logic, layout, and software alone. They have to account for thermals, mechanical stress, power delivery, signal integrity, materials, reliability, and environment, often all at once.
That broader shift was echoed in Synopsys’ technical keynote by Shankar Krishnamoorthy, the company’s chief product development officer, who described the market pressure now bearing down on silicon teams.
“The clock of our semiconductor industry has changed,” he said. “We are now on a one-year clock.”
In AI infrastructure, he argued, there is “absolutely no way to compromise” across performance, velocity, and quality. Teams are expected to deliver generation-over-generation gains faster than Moore’s Law can naturally provide, while also shipping increasingly complex devices that have to work right the first time.
Why AI Is Forcing Engineering to Converge
One of the most valuable sessions of the day came from the executive panel moderated by Allyson Klein, where leaders from across industries put real-world texture behind Synopsys’ argument. Their examples made clear that the pressure to converge engineering disciplines is already here.
If there was one repeated theme across the panel, it was the need to move learning earlier in the process. Todd Citron put it simply:
“A lot of this is about moving things left,” he said.
He described the shift not just as a matter of performance optimization, but as a way to bring lifecycle considerations into the design process much sooner.
“Not just optimizing the design for performance,” he said, “but being able to optimize the design over the life cycle by virtue of having that digital twin.”
That idea came up again and again. Panelists described a world in which software updates, new model architectures, and customer expectations are all moving too fast for traditional handoffs between engineering disciplines. Ravi Subramanian, the company’s chief product management officer, boiled it down to three drivers: products are becoming software defined and therefore “have to be silicon powered,” the pace of innovation is accelerating, and the historical “build it, test it, break it” model is simply too expensive and too slow.
Where the Silicon-to-Systems Case Got Real
One of the sharpest observations came when Subramanian recalled a customer telling him, “I don’t have shift, I have no more left to shift.”
Everyone in engineering wants to “shift left,” but the panel made clear that doing so at scale is not just a tooling problem. It is a people problem, a workflow problem, and a modeling problem. Companies may understand the need to simulate more, validate earlier, and co-design across domains, but many are still building the capability required to make that practical.
“Shift Left” Meets Real-World Limits
That tension was especially clear in the discussion of multiphysics. Synopsys spent much of the event arguing that effects once treated as downstream concerns now have to be considered much earlier in the process, especially in AI superchips and advanced packaging. The panel gave that argument some real-world proof.
“We’re going from building a one-story house now to building a multi-story tower,” Subramanian said, referring to the chip industry’s move toward stacked and heterogeneous systems.
In autonomous vehicles, Mercedes-Benz VP of R&D Sundararajan Ramalingam described the challenge as even broader.
“Autonomous driving is one of the most complex systems that we have within the automotive,” he said.
He ticked through optical physics, thermal physics, sonar physics, electromagnetism, and voltage fluctuations, then cut to the heart of the matter:
“There is no way on the planet that we will be able to test it on the road against all corner cases,” he said.
Jean Boufarhat, VP and head of silicon, Reality Labs at Meta, brought the same point into consumer hardware. Building smart glasses, he said, means working across “material science,” “optical and display integration,” sensor technologies, audio, hearing, cameras, and battery constraints, all inside a form factor that a person would actually want to wear all day. In that context, co-design stops sounding like strategy jargon and starts sounding like survival.
Multiphysics Moves to the Front of the Process
Perhaps the strongest moment of the panel came from a concrete automotive story. Ramalingam described how a radar system had performed correctly in supplier testing and again in development vehicles. Everything looked release-ready. Then, late in the process, teams started seeing false braking events. Software was the first suspect. After deeper analysis, the issue turned out not to be the code at all, but a vehicle variant with metallic paint in a specific bumper design. The metallic particles were interfering with the radar’s behavior in ways that had not appeared earlier.
“Why could not we catch this in simulation?” he said.
That question landed harder than any product slide because it made the stakes obvious. It was not a theoretical argument for better tools. It was a real example of why modern systems have to be modeled more completely and much earlier. In a software-defined product, a physical material choice can quietly undermine sensor performance, trigger software-level consequences, and create a customer-facing problem late in the release cycle. That is the kind of systems problem Synopsys is trying to position itself around.
Its flagship answer at Converge was Multiphysics Fusion, the biggest technical story of the event. Ghazi described the value of the technology as “going from an over-designing to co-designing.” Krishnamoorthy went further, explaining that Synopsys is natively integrating engines for IR drop, thermal, and stress analysis into design and signoff tools like Fusion Compiler, PrimeTime, PrimeClosure, and 3DIC Compiler. The point is not merely to add more analysis. It is to pull that analysis into the mainline design loop early enough to reduce late-stage iteration, excess guard bands, and unnecessary performance or power sacrifices.
Krishnamoorthy described the benefit plainly.
“Much, much fewer iterations at the very end of the flow,” he said. He also argued that tighter native integration helps save PPA “because you are not margining so much in your design.” That matters in a market where advanced packaging, HBM integration, and system-level constraints are becoming central to AI infrastructure. Intel reinforced the point in a video appearance, saying, “Multi-physics analysis is critical to enabling advanced packaging,” especially as thermal, structural, and electromagnetic effects become more dominant in the angstrom era.
Digital Twins Expand Beyond the Physical Product
Converge’s second major product thread, the Electronic Digital Twin Platform, also aligned closely with what panelists discussed. Synopsys is trying to expand the meaning of a digital twin beyond the structure of a product and the environment around it. As Ghazi put it, modern products increasingly need a digital twin of the electronics, too.
For automotive in particular, the company described the platform as “the operating system that you need in order to design a digital twin for an autonomous vehicle.”
That is a bold framing, but the logic is clear. If software-defined products are going to evolve after shipment, and if system behavior increasingly depends on interaction among electronics, code, physical constraints, and environmental conditions, then development teams need something more than isolated simulations and physical benches. They need a platform that lets partners, silicon models, software-in-the-loop systems, and test environments come together in a reusable way.
The discussion of iteration and lifecycle learning gave that idea more weight. Citron argued that a digital twin can help teams optimize not only for performance, but also for sustainment and maintainability over time. In industries where products remain in the field for years, that becomes a strategic advantage, not just a design convenience.
Verification Has to Evolve with the Workload
Verification was another area where Synopsys tried to stretch beyond a conventional EDA story. Across the keynotes, the company repeatedly described the burden of validating AI-era silicon and systems, with Krishnamoorthy estimating that teams may need on the order of a “quadrillion” verification and validation cycles to gain enough confidence in increasingly complex software and hardware stacks. Synopsys’ answer is software-defined hardware-assisted verification: improving capacity, performance, and debugging through software innovation layered onto HAV systems, rather than relying only on hardware refreshes.
Agentic AI and a New Operating Model for Engineering
The most forward-looking part of Converge may have been the discussion around AI and engineering workflows. Synopsys has clearly moved beyond talking about AI only as an optimizer or assistant. Its road map now centers on more agentic workflows, and its executives used the event to argue that the next step is not just smarter tools, but a different operating model for engineering teams.
That theme surfaced forcefully on Klein’s panel. Boufarhat described the shift as moving from “human in the loop” to “a person on the loop, an orchestrator rather than somebody who is actually tinkering and building every step along the way.”
Citron pushed the conversation deeper. If machines can iterate much faster than humans, he asked, then what becomes the role of the person, and “what does good look like?” That is a bigger question than it first appears. It suggests that engineering value may increasingly shift from manually executing every step to defining objectives, judging outcomes, and orchestrating more automated flows.
Subramanian added another layer.
“AI can’t be spoken about without the data strategy in the company,” he said.
Synopsys can build agents, assistants, and orchestration frameworks, but companies still need the underlying data, relationships, and institutional knowledge that make those systems useful. Without that foundation, AI becomes a demo. With it, AI starts to change how engineering work actually gets done.
The Bigger Test for the New Synopsys
The panelists were not naïve about the challenge. They talked about training, intuition, and the risk that earlier-career engineers may not develop deep systems judgment if more work is automated too early. They talked about needing deterministic tools in regulated domains like automotive. They talked about the need for courage in adopting workflows that will inevitably change roles and expectations.
That last point echoed Subramanian’s closing thought:
“Everything will happen faster than we think it’ll happen, and everything will be more disruptive,” he said.
Synopsys came to the event with a large portfolio story to tell: Multiphysics Fusion, digital twins, software-defined verification, interface IP, AI-assisted design, and agentic workflows. But the real takeaway was not the number of announcements. It was the coherence of the larger argument. Synopsys is betting that AI-era products will force engineering teams to work across silicon, software, physics, and systems much earlier and much more tightly than before.
The pressure is already visible in autonomous driving, aerospace, consumer devices, and AI silicon. The challenge now is not just whether the tools exist. It is whether organizations can build the people, data strategies, and workflows needed to use them well.
That is the real opportunity, and the real test, for the new Synopsys.