The room for Nvidia’s Open Model Super Panel at San Jose Civic was packed well before Jensen Huang really got going.
It felt less like a normal conference panel and more like one of those sessions where the industry starts saying the next platform shift out loud. Nvidia listed the session as “Open Models: Where We Are and Where We’re Headed,” moderated by Huang and held on March 18 during GTC 2026.
But despite the title, the most interesting argument onstage was not really about open models.
It was about open agents.
The real story was the move from models to systems
Huang opened the session by trying to kill the most boring framing in AI: the idea that the market is cleanly split between proprietary labs and open challengers. His point was broader than that. AI is not a single model, a single product, or a single winner-take-all category. It is a stack, a system, and increasingly a combination of many different model types working together.
“Proprietary versus open is not a thing. It’s proprietary and open,” Huang said. “A.I. is a system of models and systems of a lot of other things.”
That was the throughline of the discussion.
Yes, the panel covered open models as infrastructure. Yes, it touched on why open systems widen access and why smaller players may create some of the most important specialized breakthroughs. But the stronger consensus was that the center of gravity is moving up the stack.
Models matter. Open models matter a lot. But what increasingly matters more is the system wrapped around them: orchestration, memory, tools, identity, governance, and runtime.
That is why the panel landed as such a strong case for open agents.
Aravind Srinivas gave the clearest product abstraction
The sharpest product framing came from Aravind Srinivas, who described Perplexity Computer in a way that captured where the market seems to be heading. Instead of asking users to choose a model, route tasks manually, and stitch together their own workflows, the system should take the task and decide how to solve it.
“A.I. is not the model, it’s the system. It’s the computer,” Srinivas said. “Perplexity Computer is the idea that you should build the organizational system of everything that A.I. can do.”
That is a bigger idea than product branding.
It suggests the next useful abstraction layer in AI may not be a chatbot or even a single frontier model. It may be a computer for delegation: a system that knows which models to call, which tools to use, when open models are good enough, when closed models are worth using, and how to pull those pieces into one coherent workflow.
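The delegation idea Srinivas describes can be sketched as a minimal model router. Everything below is illustrative: the model names, cost figures, and capability scores are hypothetical, not drawn from Perplexity's actual system; the point is only the shape of the decision, use open models when they are good enough and reach for a closed frontier model when the task demands it.

```python
# Minimal sketch of a "computer for delegation": a router that picks
# the cheapest model capable of a task. All names and numbers are
# hypothetical placeholders, not any real product's catalog.

from dataclasses import dataclass


@dataclass
class ModelSpec:
    name: str
    open_weights: bool    # open model vs. proprietary API
    cost_per_call: float  # relative cost, arbitrary units
    capability: int       # rough capability score, higher is better


def route(task_difficulty: int, models: list[ModelSpec]) -> ModelSpec:
    """Return the cheapest model whose capability covers the task.

    If nothing clears the bar, fall back to the most capable model.
    """
    capable = [m for m in models if m.capability >= task_difficulty]
    if not capable:
        return max(models, key=lambda m: m.capability)
    return min(capable, key=lambda m: m.cost_per_call)


catalog = [
    ModelSpec("open-small", open_weights=True, cost_per_call=1.0, capability=3),
    ModelSpec("open-large", open_weights=True, cost_per_call=3.0, capability=6),
    ModelSpec("frontier-closed", open_weights=False, cost_per_call=10.0, capability=9),
]

print(route(2, catalog).name)   # easy task -> cheapest capable open model
print(route(8, catalog).name)   # hard task -> frontier model
```

A real system would route on much richer signals (latency, context length, tool support), but the abstraction is the same: the user states the task and the system decides how to solve it.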
Srinivas also made it clear that the future is unlikely to be a simple ideological split between open and closed systems. Different models will serve different functions.
Harrison Chase made the case for the harness layer
If Srinivas provided the cleanest product abstraction, Harrison Chase provided the clearest builder abstraction.
His phrase, “harness engineering,” may have been one of the most important on the panel. Chase used it to describe everything around the model: which sub-agents are used, which skills are attached, how memory works, what tools are selected, and how the environment is configured for a specific domain or task.
“Harness engineering is everything around the model,” Chase said.
He made the point that when people are impressed by a polished AI product, they are often responding not just to the raw model quality but to the system surrounding it. That matters because it runs counter to one of the laziest ideas in AI discourse: that anything built around a model is “just a wrapper.”
Once models get good enough, the wrapper stops being a wrapper and starts becoming the operating system. The harness is where general intelligence becomes useful intelligence.
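Chase's "everything around the model" framing can be made concrete by treating the harness as data. The sketch below is a hypothetical configuration shape, not an API from LangChain or any panelist's product; it just shows how two very different agents can sit on the same base model, with the differentiation living entirely in the harness.

```python
# Hedged sketch of "harness engineering" as configuration: the model
# is one field among many. Field names are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Harness:
    model: str                  # which base model to call
    system_prompt: str          # task framing for the model
    tools: list[str] = field(default_factory=list)       # callable tools
    sub_agents: list[str] = field(default_factory=list)  # delegated roles
    memory_backend: str = "none"  # e.g. vector store, scratchpad
    max_steps: int = 10           # runtime budget / governance knob


# Two harnesses over the same model weights: same intelligence,
# different products.
support_agent = Harness(
    model="open-large",
    system_prompt="Resolve customer tickets politely.",
    tools=["search_kb", "create_ticket"],
    memory_backend="vector-store",
)

coding_agent = Harness(
    model="open-large",
    system_prompt="Fix failing tests in this repository.",
    tools=["read_file", "write_file", "run_tests"],
    sub_agents=["planner", "reviewer"],
    max_steps=50,
)
```

Seen this way, the "just a wrapper" dismissal misses where the work is: the tool selection, memory wiring, and step budgets are the product.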
That also helps explain why routing and orchestration are starting to look like durable product layers. A useful reference point here is The Neuron’s write-up of OpenRouter. While not identical to what the panel discussed, it maps closely to the same underlying shift: value is moving into the layer that decides how intelligence gets assembled and deployed.
OpenClaw mattered less as a product than as a signal
OpenClaw hovered over the whole conversation even when the panel was not explicitly about it.
Huang framed it as a turning point, not just because it exists, but because it makes a new category legible. In the panel transcript, he described it as a big deal. In a separate GTC press Q&A, he went even further, calling it an inflection point for what comes after reasoning systems and arguing that it now needs enterprise-grade layers, including privacy, governance, security, and optimized runtimes.
“OpenClaw is a big deal,” Huang said, a point he reiterated throughout GTC.
The point is not that OpenClaw is the only product that matters.
The point is that it signals the conversation has shifted from answering to acting.
That is the more important category change. The panelists kept circling the same idea, even when they used slightly different language: AI systems are moving beyond responses and into execution across files, tools, workflows, and goals.
Michael Truell connected coding agents to the rest of the economy
Cursor founder and CEO Michael Truell offered one of the cleanest bridges from coding agents to the rest of the economy. His argument was that coding was simply the first place this system style began working in a real, visible way. The same pattern is now spreading into other domains.
“What started working in coding last year … now, we’re going to all of these other domains,” Truell said.
That is a useful lens for understanding why this panel mattered.
Coding agents are the preview, not the endpoint.
The combination of models, files, CLIs, tool use, and rapid iteration made coding the first environment where agentic systems felt obviously real. If those same primitives spread outward into research, healthcare, legal workflows, operations, and back office work, then the real market is not “AI coding.” It is the much larger category of computer work being reinterpreted as agent work.
Mira Murati made the strongest case for why openness matters
Mira Murati, Founder and CEO of Thinking Machines, gave the strongest argument for open systems not as a cost-saving tactic but as infrastructure for innovation.
She pushed back on the idea that open models are somehow inherently second-tier, arguing that the gap people see today may be contingent rather than permanent. More importantly, she framed openness as a way to broaden access to research, experimentation, and meaningful technical contributions beyond the largest labs.
“There is nothing fundamentally different between an open and a closed model,” she said.
That moves the discussion beyond the usual “open is cheaper” talking point.
Murati’s argument makes open models sound more like scientific infrastructure: a way to expand the number of people who can build, test, specialize, and discover new applications. Even with some transcript noise, the shape of her point comes through clearly. Open systems widen the innovation surface area.
Jensen Huang’s framing tied it all together
Huang did more than moderate. He supplied the framing that made the other panelists’ comments cohere.
He argued that open models, taken together, are already enormous in aggregate and may become even more important as AI spreads across more domains, products, and industries. That is a useful way to think about the market. The future is too broad and too specialized to be served by a tiny number of monolithic systems alone.
The AI market is expanding into too many niches, workflows, and sectors for a single model strategy to dominate. And that is really the panel’s strongest takeaway.
Open models were the headline. Open agents were the case being made
Open models are becoming the raw material for specialized intelligence. Open agents are becoming the interface through which that intelligence acts. And the harness around both is becoming the layer where trust, customization, product value, and defensibility get built.
In other words, the debate is shifting from models to systems.
That is what made the panel feel bigger than a panel. It felt like the industry was trying to admit, in public, that the next AI platform may not belong to whichever lab builds the best single model.
It may belong to whoever builds the best open agent system on top of many of them.
Meanwhile, Nvidia is also backing a multibillion-dollar AI data center in South Korea as part of a broader push to expand open AI infrastructure and counter rising Chinese competition.