Microsoft’s plan to fix the web with AI has already hit an embarrassing security flaw

News Room

Researchers have already found a critical vulnerability in the new NLWeb protocol Microsoft made a big deal about just a few months ago at Build. It’s a protocol that’s supposed to be “HTML for the Agentic Web,” offering ChatGPT-like search to any website or app. The discovery of the embarrassing security flaw comes in the early stages of Microsoft deploying NLWeb with customers like Shopify, Snowflake, and TripAdvisor.

The flaw allows any remote user to read sensitive files, including system configuration files and even OpenAI or Gemini API keys. What’s worse is that it’s a classic path traversal flaw, meaning it’s as easy to exploit as visiting a malformed URL. Microsoft has patched the flaw, but it raises questions about how something this basic slipped through, given the company’s big new focus on security.
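For readers unfamiliar with the bug class: path traversal happens when a server builds a filesystem path from untrusted input without confining the result, so a request containing “../” segments can climb out of the directory it was meant to serve from. The sketch below is a generic Python illustration of that pattern and its fix, not the actual NLWeb code; the directory name and handler functions are hypothetical.

```python
# Minimal illustration of a classic path traversal bug and its fix.
# This is NOT the NLWeb code; the paths and handlers are hypothetical.
from pathlib import Path

STATIC_ROOT = Path("/srv/app/static").resolve()

def serve_file_vulnerable(requested: str) -> bytes:
    # BUG: joining untrusted input directly lets "../" segments escape
    # the static directory, e.g. requested = "../../.env"
    return (STATIC_ROOT / requested).read_bytes()

def serve_file_fixed(requested: str) -> bytes:
    # Resolve the final path and verify it still lives under STATIC_ROOT
    # before touching the filesystem.
    target = (STATIC_ROOT / requested).resolve()
    if not target.is_relative_to(STATIC_ROOT):
        raise PermissionError("path traversal attempt blocked")
    return target.read_bytes()
```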

“This case study serves as a critical reminder that as we build new AI-powered systems, we must re-evaluate the impact of classic vulnerabilities, which now have the potential to compromise not just servers, but the ‘brains’ of AI agents themselves,” says Aonan Guan, one of the security researchers (alongside Lei Wang) who reported the flaw to Microsoft. Guan is a senior cloud security engineer at Wyze (yes, that Wyze), but this research was conducted independently.

Guan and Wang reported the flaw to Microsoft on May 28th, just weeks after NLWeb was unveiled. Microsoft issued a fix on July 1st, but has not issued a CVE for the issue — the industry standard for tracking and classifying vulnerabilities. The security researchers have been pushing Microsoft to issue a CVE, but the company has been reluctant to do so. A CVE would alert more people to the fix and let them track it more closely, even if NLWeb isn’t widely used yet.

“This issue was responsibly reported and we have updated the open-source repository,” says Microsoft spokesperson Ben Hope, in a statement to The Verge. “Microsoft does not use the impacted code in any of our products. Customers using the repository are automatically protected.”

Guan says NLWeb users “must pull and vend a new build version to eliminate the flaw,” otherwise any public-facing NLWeb deployment “remains vulnerable to unauthenticated reading of .env files containing API keys.”
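Until a deployment is rebuilt, Guan’s warning is straightforward to check against infrastructure you control. The sketch below is a rough, hypothetical probe rather than the exact vector from the researchers’ report: the candidate paths and the response check are assumptions for illustration, and it should only ever be pointed at servers you own.

```python
# Rough sanity check for exposed dotfiles on a deployment you own.
# The probe paths below are illustrative guesses, not the specific
# traversal vector reported to Microsoft.
import urllib.error
import urllib.request

CANDIDATE_PATHS = [
    "/.env",
    "/static/../../.env",
    "/static/..%2f..%2f.env",  # URL-encoded traversal variant
]

def probe(base_url: str) -> None:
    for path in CANDIDATE_PATHS:
        url = base_url.rstrip("/") + path
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read(200).decode("utf-8", "replace")
                if "KEY" in body.upper():
                    print(f"[!] {url} returned what looks like a .env file")
                else:
                    print(f"[?] {url} responded ({resp.status}); inspect manually")
        except (urllib.error.HTTPError, urllib.error.URLError):
            print(f"[ok] {url} not readable")

if __name__ == "__main__":
    probe("https://your-deployment.example.com")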

While leaking an .env file in a web application is serious enough, Guan argues it’s “catastrophic” for an AI agent. “These files contain API keys for LLMs like GPT-4, which are the agent’s cognitive engine,” says Guan. “An attacker doesn’t just steal a credential; they steal the agent’s ability to think, reason, and act, potentially leading to massive financial loss from API abuse or the creation of a malicious clone.”

Microsoft is also pushing ahead with native support for the Model Context Protocol (MCP) in Windows, even as security researchers have warned about MCP’s risks in recent months. If the NLWeb flaw is anything to go by, Microsoft will need to be extra careful about balancing the speed of its AI feature rollout against its stated commitment to making security the number one priority.
