Your coworker’s AI-built app might be leaking company secrets

News Room

AI coding tools have made it ridiculously easy to build a web app; a working prototype now takes only minutes to set up. That ease has lowered the barrier to app development, and it is creating a new set of problems. So what happens when these AI-made apps go live without anyone checking the locks? Secrets spill out all over the internet.

A WIRED report highlights a major security problem around so-called “vibe-coded” apps, which are built using AI development platforms such as Lovable, Replit, Base44, and Netlify.

Why this is a bigger issue than you think

Security researcher Dor Zvi and his team at RedAccess analyzed thousands of these apps and found more than 5,000 with little to no security or authentication. Most could be accessed by practically anyone who found the ‘right’ URL. A few had only minimal barriers, allowing visitors to sign in with any email address. Nearly half of the exposed apps appeared to contain sensitive data, including medical information, financial records, corporate presentations, strategy documents, and customer chatbot logs, Zvi said.

The investigation reportedly also revealed hospital work assignments with personally identifiable information, ad purchasing data, market presentation strategies, sales information, and even customer conversations including names and contact details. Several of these apps were still online, although WIRED couldn’t verify whether all the data it reviewed was real or sensitive.

How vibe coding has become dangerous in IT

This story isn’t limited to one batch of sloppy AI apps. These tools let people with little or no software engineering or security experience build and publish apps quickly, often outside normal IT approval processes. So a member of the marketing team, an operations worker, or a founder can create a tool for internal use, connect it to real data, and accidentally leave it open to the web.

Zvi compared it to the old wave of exposed Amazon S3 buckets, where misconfigurations led companies to leak sensitive data at a massive scale. Security researcher Joel Margolis told WIRED that AI coding tools only do what’s asked of them. So if a user does not ask for security explicitly, the app may not be secure by default.
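Margolis’s point can be illustrated with a minimal sketch (all handler, route, and field names here are hypothetical, not taken from any real platform): a generated API route that serves records to any caller who knows the URL, next to the same route with the explicit authentication check a user would have had to ask for.

```python
# Minimal sketch of the "secure by default" gap (hypothetical names and data).
# An AI-generated handler often looks like the first function: it returns
# whatever is in the database to anyone who discovers the URL.

RECORDS = {"patients": ["Alice, ward 3", "Bob, ward 7"]}  # stand-in data

def handle_request(path, headers):
    """Vibe-coded route: no authentication at all."""
    if path == "/api/patients":
        return 200, RECORDS["patients"]
    return 404, None

API_KEYS = {"s3cr3t-key"}  # in real life this belongs in a secrets store

def handle_request_secure(path, headers):
    """Same route with the auth check the user must explicitly ask for."""
    token = headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in API_KEYS:
        return 401, None  # reject anonymous callers
    return handle_request(path, headers)
```

Anyone who finds the URL gets a 200 and the data from the first handler; the second rejects the identical request with a 401 until a valid key is supplied. The difference is one guard clause that nobody asked the AI to write.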

What did the companies say?

Replit CEO Amjad Masad wrote on X that some users had published apps on the open web that should have been private, adding that public apps being accessible online is expected behavior. Meanwhile, Lovable said it takes exposed data and phishing reports seriously and is investigating. Base44 parent company Wix stated that its platform provides security and visibility controls, arguing that public access reflects user configuration choices rather than a platform vulnerability.

This is a reality check for anyone treating vibe coding like a fast track to startup success. AI-generated apps can move quickly, but that speed comes with real trade-offs. From weak oversight to hidden vulnerabilities, AI-built apps can become a serious problem once a product is in users’ hands.

