The latest MCP spec update fortifies enterprise infrastructure with tighter security, moving AI agents from pilot to production.
Marking its first year, the Anthropic-created open-source project released a revised spec this week aimed at the operational headaches keeping generative AI agents stuck in pilot mode. Backed by Amazon Web Services (AWS), Microsoft, and Google Cloud, the update adds support for long-running workflows and tighter security controls.
The market is drifting away from fragile, bespoke integrations. For enterprises, this is a chance to deploy agentic AI that can read and write to corporate data stores without incurring massive technical debt.
MCP advances from ‘developer curiosity’ to practical infrastructure
The narrative has shifted from experimental chatbots to structural integration. Since September, the registry has expanded by 407 percent, now housing nearly two thousand servers.
“A year on from Anthropic’s launch of the Model Context Protocol, MCP has gone from a developer curiosity to a practical way to connect AI to the systems where work and data live,” says Satyajith Mundakkal, Global CTO at Hexaware, following this latest spec update.
Microsoft has already “signaled the shift by adding native MCP support to Windows 11,” effectively moving the standard directly into the operating system layer.
This software standardisation arrives alongside an aggressive hardware scale-up. Mundakkal highlights the “unprecedented infrastructure build-out,” citing OpenAI’s multi-gigawatt ‘Stargate’ programme. “These are clear signals that AI capabilities, and the data they depend on, are scaling fast,” he says.
MCP is the plumbing feeding these massive compute resources. As Mundakkal puts it: “AI is only as good as the data it can reach safely.”
Until now, hooking an LLM into a database was mostly synchronous. That works for a chatbot checking the weather, but it fails when migrating a codebase or analysing healthcare records.
The new ‘Tasks’ feature (SEP-1686) changes this. It gives servers a standard way to track work, allowing clients to poll for status or cancel jobs if things go sideways. Ops teams automating infrastructure migration need agents that can run for hours without timing out, and supporting states like ‘working’ or ‘input_required’ finally brings that resilience to agentic workflows.
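The lifecycle can be sketched roughly as follows. The state names mirror those the spec describes (‘working’, ‘input_required’, ‘completed’, ‘cancelled’), but the classes and methods here are illustrative stand-ins, not the actual MCP SDK API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the SEP-1686 task lifecycle. State names mirror
# the spec's ("working", "input_required", "completed", "cancelled");
# the classes and methods are illustrative, not the official MCP SDK.

@dataclass
class Task:
    task_id: str
    state: str = "working"
    result: object = None

class TaskServer:
    """Server-side bookkeeping for long-running work."""
    def __init__(self):
        self._tasks = {}

    def start(self, task_id: str) -> Task:
        task = Task(task_id)
        self._tasks[task_id] = task
        return task

    def get(self, task_id: str) -> Task:
        """Clients poll this instead of holding a connection open."""
        return self._tasks[task_id]

    def cancel(self, task_id: str):
        task = self._tasks[task_id]
        if task.state == "working":
            task.state = "cancelled"

    def complete(self, task_id: str, result):
        task = self._tasks[task_id]
        if task.state == "working":
            task.state, task.result = "completed", result

server = TaskServer()
server.start("migrate-db")                        # kick off a long job
server.complete("migrate-db", "42 tables migrated")  # hours later
status = server.get("migrate-db")                 # client polls for status
print(status.state, "->", status.result)
```

The point of the pattern is that the client never blocks: it fires the job, polls at its leisure, and can bail out cleanly via cancellation.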
MCP spec update improves security
For CISOs especially, AI agents often look like a massive, uncontrolled attack surface. The risks are already visible: “security researchers even found approximately 1,800 MCP servers exposed on the public internet by mid-2025,” a figure implying that adoption on private infrastructure is significantly wider.
“Done poorly,” Mundakkal warns, “[MCP] becomes integration sprawl and a bigger attack surface.”
To address this, the maintainers tackled the friction of Dynamic Client Registration (DCR). The fix is URL-based client registration (SEP-991), where clients provide a unique ID pointing to a self-managed metadata document to cut the admin bottleneck.
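In outline, the client’s ID becomes an HTTPS URL pointing at a metadata document the client hosts itself, which the server fetches and validates instead of running an interactive registration step. A minimal sketch of that validation, assuming field names in the style of OAuth client metadata (the exact field names here are assumptions, not quoted from the spec):

```python
import json
from urllib.parse import urlparse

# Sketch of URL-based client registration (SEP-991): the client ID is an
# HTTPS URL pointing at a self-hosted metadata document. Field names
# follow the OAuth client-metadata convention and are assumptions,
# not the exact spec wording.

def validate_client_metadata(client_id: str, document: dict) -> bool:
    url = urlparse(client_id)
    if url.scheme != "https":
        return False                          # IDs must be HTTPS URLs
    if document.get("client_id") != client_id:
        return False                          # document must claim the same ID
    redirects = document.get("redirect_uris", [])
    # Only accept HTTPS redirect targets.
    return bool(redirects) and all(
        urlparse(u).scheme == "https" for u in redirects
    )

doc = json.loads("""{
  "client_id": "https://agent.example.com/mcp-client.json",
  "client_name": "Example Agent",
  "redirect_uris": ["https://agent.example.com/callback"]
}""")
print(validate_client_metadata("https://agent.example.com/mcp-client.json", doc))
```

Because the document lives at a URL the client controls, registration reduces to a fetch-and-check rather than an admin-mediated approval queue.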
Then there’s ‘URL Mode Elicitation’ (SEP-1036). It allows a server – handling payments, for instance – to bounce a user to a secure browser window for credentials. The agent never sees the password; it just gets the token. It keeps the core credentials isolated, a non-negotiable for PCI compliance.
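The separation can be sketched as three steps: the server issues a URL, the human authenticates in a real browser outside the agent, and the agent only ever retrieves an opaque token. The class and method names below are illustrative, not the MCP SDK:

```python
import secrets

# Sketch of URL-mode elicitation (SEP-1036). The server hands the agent
# a URL; the human completes authentication in a browser; the agent only
# ever receives an opaque token. Names are illustrative, not the MCP SDK.

class PaymentServer:
    def __init__(self):
        self._sessions = {}

    def elicit(self, session_id: str) -> str:
        """Step 1: return a URL for the user to open in a browser."""
        self._sessions[session_id] = {"token": None}
        return f"https://payments.example.com/auth?session={session_id}"

    def browser_login(self, session_id: str, password: str):
        """Step 2: runs in the user's browser -- the agent never calls this."""
        # (Real credential verification elided.) Mint an opaque token.
        self._sessions[session_id]["token"] = secrets.token_urlsafe(16)

    def fetch_token(self, session_id: str):
        """Step 3: all the agent ever sees -- a token, never the password."""
        return self._sessions[session_id]["token"]

server = PaymentServer()
url = server.elicit("sess-1")               # agent shows this URL to the user
server.browser_login("sess-1", "hunter2")   # happens outside the agent
token = server.fetch_token("sess-1")
print(isinstance(token, str))
```

The design choice is the same one behind OAuth redirects: credentials cross only the browser-to-server hop, so the agent layer holds nothing worth stealing beyond a revocable token.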
Harish Peri, SVP at Okta, believes this brings the “necessary oversight and access control to build a secure and open AI ecosystem.”
One feature in the spec update has somewhat flown under the radar: ‘Sampling with Tools’ (SEP-1577). Servers used to be passive data fetchers; now they can run their own loops using the client’s tokens. Imagine a “research server” spawning sub-agents to scour documents and synthesise a report. No custom client code is required; it simply moves the reasoning closer to the data.
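The inversion is easiest to see in a loop sketch: the server drives the reasoning by asking the client’s model to sample, offers its own tools, executes whatever tool call comes back, and repeats until it gets a final answer. Everything below, including the stand-in model, is a hypothetical illustration rather than the MCP SDK:

```python
# Sketch of a server-driven agent loop under "Sampling with Tools"
# (SEP-1577). The server asks the *client's* model to sample, offering
# its own tools; the client runs no custom code. The fake model and all
# names here are illustrative stand-ins.

def fake_client_model(messages, tools):
    """Stand-in for the client's LLM: asks for a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "search_docs", "args": {"q": "MCP"}}}
    return {"text": "Report: found 2 documents about MCP."}

def search_docs(q):
    """A tool the server owns and executes itself."""
    return [f"{q} overview.pdf", f"{q} spec.md"]

TOOLS = {"search_docs": search_docs}

def run_research_loop(sample):
    """Server-owned loop: sample from the client, run tools, repeat."""
    messages = [{"role": "user", "content": "Summarise our MCP docs"}]
    while True:
        reply = sample(messages, tools=list(TOOLS))
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply["text"]

print(run_research_loop(fake_client_model))
```

Because the loop lives server-side, the data being reasoned over never has to be shipped wholesale to the client, which is precisely what “moving the reasoning closer to the data” means in practice.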
However, wiring these connections is only step one. Mayur Upadhyaya, CEO at APIContext, argues that “the first year of MCP adoption has shown that enterprise AI doesn’t begin with rewrites, it begins with exposure.”
Visibility is the next hurdle. “The next wave will be about visibility: enterprises will need to monitor MCP uptime and validate authentication flows just as rigorously as they monitor APIs today,” Upadhyaya explains.
MCP’s roadmap reflects this, with updates targeting better “reliability and observability” for debugging. If you treat MCP servers as “set and forget,” you’re asking for trouble. Mundakkal agrees, noting the lesson from year one is to “pair MCP with strong identity, RBAC, and observability from day one.”
Star-studded industry line-up adopting MCP for infrastructure
A protocol is only as good as who uses it. In the year since the original spec’s release, MCP has hit nearly two thousand servers. Microsoft is using it to bridge GitHub, Azure, and M365. AWS is baking it into Bedrock. Google Cloud supports it across Gemini.
This reduces vendor lock-in. A Postgres connector built for MCP should theoretically work across Gemini, ChatGPT, or an internal Anthropic agent without a rewrite.
The “plumbing” phase of Generative AI is settling down, and open standards are winning the debate on connectivity. Technology leaders should look to audit internal APIs for MCP readiness – focusing on exposure rather than rewrites – and verify that the new URL-based registration fits current IAM frameworks.
Monitoring protocols must also be established immediately. While the latest MCP spec update is backward compatible with existing infrastructure, the new features are the only way to bring agents into regulated, mission-critical workflows and ensure security.