When OpenAI launched Frontier in February, the announcement was framed as a platform for enterprise AI agents. What it actually signalled was a direct challenge to the revenue architecture that has underpinned the software industry for the better part of two decades.
Frontier is designed to act as a semantic layer across an organisation’s existing systems, connecting data warehouses, CRM platforms, ticketing tools, and internal applications so that AI agents can operate with the same business context a human employee would have. OpenAI describes these agents as “AI coworkers” that can be onboarded, assigned identities, granted permissions, and reviewed for performance.
Early customers include Uber, State Farm, Intuit, and Thermo Fisher Scientific. The commercial ambition behind the platform is not subtle. OpenAI CFO Sarah Friar has stated that enterprise customers currently account for roughly 40% of the company’s revenue, and that she aims to push that figure closer to 50% by year-end. Frontier is the vehicle.
What Frontier actually does to enterprise workflows
The case for Frontier rests on a problem that CIOs have described consistently through 2025 and into this year: agents deployed in isolation add complexity rather than remove it. Each new agent becomes a point of integration, requiring its own data connections and governance controls, and the result is fragmentation at scale.
OpenAI’s answer is a shared business context. Rather than each agent building its own understanding of how an organisation works, Frontier provides a centralised layer that all agents can reference. Fidji Simo, OpenAI’s CEO of Applications, put it plainly during the launch briefing, drawing on her time running Instacart.
“We spent months integrating each of the ones that we selected. We didn’t even get what we actually wanted, because each tool was good for one use case, but they weren’t integrated or talking to one another, so we were just reinforcing silos upon silos.”
The results OpenAI cites from early deployments are notable. A global investment firm using Frontier agents across its sales process freed up more than 90% of salesperson time previously spent on administrative tasks. A technology customer reported saving 1,500 hours a month in product development. At a major manufacturer, agents compressed a production optimisation process from six weeks to a single day.
Frontier is also deliberately open. It manages agents built by OpenAI, agents built in-house by enterprise teams, and agents from third-party providers, including Google, Microsoft, and Anthropic. That openness is both a design principle and a positioning move: it makes Frontier harder to dismiss as a vendor lock-in play, while expanding the surface area it can govern.
The seat-licence problem nobody wants to say out loud
The deeper concern for incumbents is structural. The per-seat licence model that has made SaaS enormously profitable assumes that software usage maps to headcount. If an AI agent handles the workflow that previously required a human employee logging into Salesforce, the justification for that seat licence weakens. Fortune described it directly: the fear in the market is that platforms like Frontier will make SaaS software “invisible” and consequently less valuable.
Salesforce’s stock has declined more than 27% so far this year, a fall analysts have attributed more to agentic AI disruption fears than to any weakness in its underlying financials. The company’s Q4 FY2026 results were solid. Revenue reached $11.2 billion in the quarter, Agentforce’s annual recurring revenue hit $800 million, and the company closed 29,000 Agentforce deals.
The stock still fell after hours, on guidance that came in below Wall Street’s expectations.
The incumbents are not standing still. Salesforce has introduced what it calls the Agentic Enterprise License Agreement, a fixed-price, all-you-can-eat model for Agentforce that attempts to make consumption more predictable for enterprise buyers.
ServiceNow has moved to consumption-based pricing for some of its AI agent offerings, and in January signed a multiyear agreement with OpenAI to embed frontier model capabilities directly into its platform. Microsoft has introduced consumption-based pricing alongside its per-user model for Copilot Studio.
The pricing pivot is significant. It signals that these companies understand the seat-licence model cannot survive agentic AI unchanged. The question is whether repricing is enough or whether the architecture itself needs to change.
Two bets on where the intelligence layer should sit
The strategic divide in enterprise AI right now runs along a single fault line: should AI agents live inside systems of record, or above them? Salesforce and ServiceNow are betting on the embedded model. They argue that agents are most effective when they sit closest to the data, and that CIOs will trust governance and compliance controls more readily from vendors already managing their workflows.
Marc Benioff, CEO of Salesforce, has described Agentforce as the “operating system for the agentic enterprise.” ServiceNow positions its AI Control Tower as a centralised governance layer for all agents, regardless of where they originate.
OpenAI is betting on the overlay model, as is Anthropic with Claude Cowork. Frontier sits above existing systems, using open standards to connect them rather than replace them. The pitch is that enterprises should not have to replatform to get production-grade agents running across their operations.
Both arguments have merit, and enterprises evaluating these platforms will find genuine trade-offs. The embedded approach offers tighter data control and faster time to value within a known ecosystem. The overlay approach offers flexibility and avoids the problem of agents that can only see one vendor’s data.
What the incumbents have that OpenAI does not is decades of institutional trust and existing contracts. What OpenAI has is the model capability advantage and an increasingly credible argument that it can run the intelligence layer across the whole enterprise, not just one product family.
What CIOs are actually deciding
Frontier is currently available to a limited set of customers, with broader availability expected over the coming months. Pricing has not been disclosed publicly, with OpenAI directing interested organisations to its enterprise sales team.
For CIOs, the practical decision is not yet binary. Most large enterprises run Salesforce, ServiceNow, and Microsoft infrastructure simultaneously. The immediate question is whether Frontier becomes an orchestration layer that connects those systems, or a competitive platform that starts displacing them.
OpenAI’s chief revenue officer, Denise Dresser, offered what is probably the most honest summary of where enterprise AI agents stand right now. “What’s really missing still for most companies is just a simple way to unleash the power of agents as teammates that can operate inside the business without the need to rework everything underneath.”
That gap is exactly what every platform in this space claims to close. The difference with Frontier is that the company making the claim now has the enterprise relationships, the production deployments, and the model capability to back it up. The SaaS incumbents have a head start on trust and data. Whether that proves sufficient is the central question for enterprise software through the rest of 2026.
