Meta Pauses Work With Mercor After LiteLLM-Linked Data Breach

News Room

Meta has paused work with Mercor after a security breach at the AI training startup, according to WIRED.

Mercor has also confirmed it was affected by a broader supply chain attack involving the open-source project LiteLLM.

Mercor sits in a sensitive part of the AI ecosystem. The startup connects major AI companies with contractors and domain experts for model training and evaluation, so a breach there raises questions about the vendor layer that supports AI development.

How the breach reached Mercor

Mercor told TechCrunch that it was “one of thousands of companies” affected by the LiteLLM compromise. The company said it moved promptly to contain and remediate the incident and brought in third-party forensics experts. TechCrunch also reported that Mercor works with companies including OpenAI and Anthropic and says it facilitates more than $2 million in daily payouts.

SecurityWeek reported that attackers used compromised maintainer credentials to publish malicious LiteLLM versions 1.82.7 and 1.82.8 to PyPI. Those versions were available for roughly 40 minutes, a brief window on the clock but long enough to create downstream exposure for widely installed software.
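For teams that install LiteLLM from PyPI, incidents like this are the standard argument for pinning dependencies by exact version and hash, so that an unexpected release cannot install silently. A minimal sketch of a pip requirements file follows; it assumes 1.82.6 was the last clean release (the reporting implies but does not state this), and the hash is a placeholder, not a real value:

```
# requirements.txt: pin LiteLLM to a known-good release and require a hash match.
# The hash below is a placeholder; generate the real value with `pip hash <wheel>`
# or copy it from the package's PyPI release page.
litellm==1.82.6 --hash=sha256:<expected-sha256-of-the-artifact>
```

Installing with `pip install --require-hashes -r requirements.txt` makes pip refuse any artifact whose digest does not match, so a maliciously republished version would fail to install rather than run.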

WIRED’s report said Meta’s pause is indefinite while it investigates, and that other major AI labs are reevaluating their work with Mercor. OpenAI, according to that report, has not stopped current projects with Mercor but is investigating whether proprietary training data may have been exposed.

Why this breach matters for AI vendors

Mercor has not confirmed the full scope of any exposed data. SecurityWeek reported that Lapsus$ claimed to have stolen more than 4TB of data, but Mercor has not validated that claim. For now, the strongest confirmed facts are narrower: Mercor was hit through the LiteLLM incident, it contained and remediated the event, and at least one major client paused work.

That is enough to make this bigger than a single startup’s breach. Mercor operates in the workflow layer between AI labs and the human contractors who build, label, and evaluate model outputs. When that layer is compromised through a common dependency, the fallout can reach customers even if their own internal systems were never directly breached.

The pattern is familiar across cyber incidents. Trusted software and operational intermediaries can become the fastest route to disruption, a dynamic also evident in the Hasbro cyberattack, which knocked systems offline for weeks.

Meta has not commented publicly on the pause. Until Mercor’s forensic review answers the open questions about exposure and exfiltration, the breach will remain a warning about how much AI risk lies beneath the model layer.

