Over the weekend, Meta dropped two new Llama 4 models: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.”
Maverick quickly secured the number-two spot on LMArena, the AI benchmark site where humans compare outputs from different systems and vote on the best one. In Meta’s press release, the company highlighted Maverick’s Elo score of 1417, which placed it above OpenAI’s GPT-4o and just under Gemini 2.5 Pro. (A higher Elo score means the model wins more often in head-to-head matchups in the arena.)
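For context, Elo is the same rating scheme used in competitive chess: each head-to-head result shifts both players’ scores based on how surprising the outcome was. The short Python sketch below shows a standard chess-style Elo update; it is illustrative only, since LMArena’s published methodology is more involved, and the K-factor and function names here are assumptions rather than details from Meta or LMArena.

```python
# Illustrative chess-style Elo update (not LMArena's actual methodology).
# K-factor and names are assumptions for demonstration purposes.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return both ratings after one head-to-head vote."""
    e_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: a 1417-rated model beating a 1400-rated model gains a little,
# because the win was only slightly more likely than not.
print(update(1417, 1400, a_won=True))  # roughly (1432.2, 1384.8)
```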
The achievement seemed to position Meta’s open-weight Llama 4 as a serious challenger to the state-of-the-art, closed models from OpenAI, Anthropic, and Google. Then, AI researchers digging through Meta’s documentation discovered something unusual.
In fine print, Meta acknowledges that the version of Maverick tested on LMArena isn’t the same as what’s available to the public. According to Meta’s own materials, it deployed an “experimental chat version” of Maverick to LMArena that was specifically “optimized for conversationality,” TechCrunch first reported.
“Meta’s interpretation of our policy did not match what we expect from model providers,” LMArena posted on X two days after the model’s release. “Meta should have made it clearer that ‘Llama-4-Maverick-03-26-Experimental’ was a customized model to optimize for human preference. As a result of that, we are updating our leaderboard policies to reinforce our commitment to fair, reproducible evaluations so this confusion doesn’t occur in the future.”
A spokesperson for Meta didn’t have a response to LMArena’s statement in time for publication.
While what Meta did with Maverick isn’t explicitly against LMArena’s rules, the site has shared concerns about gaming the system and taken steps to “prevent overfitting and benchmark leakage.” When companies can submit specially tuned versions of their models for testing while releasing different versions to the public, rankings like LMArena’s become less meaningful as indicators of real-world performance.
“It’s the most widely respected general benchmark because all of the other ones suck,” independent AI researcher Simon Willison tells The Verge. “When Llama 4 came out, the fact that it came second in the arena, just after Gemini 2.5 Pro — that really impressed me, and I’m kicking myself for not reading the small print.”
Shortly after Meta released Maverick and Scout, the AI community started talking about a rumor that Meta had also trained its Llama 4 models to perform better on benchmarks while hiding their real limitations. Meta’s VP of generative AI, Ahmad Al-Dahle, addressed the accusations in a post on X: “We’ve also heard claims that we trained on test sets — that’s simply not true and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations.”
Some also noticed that Llama 4 was released at an odd time. Saturday doesn’t tend to be when big AI news drops. After someone on Threads asked why Llama 4 was released over the weekend, Meta CEO Mark Zuckerberg replied: “That’s when it was ready.”
“It’s a very confusing release generally,” says Willison, who closely follows and documents AI models. “The model score that we got there is completely worthless to me. I can’t even use the model that they got a high score on.”
Meta’s path to releasing Llama 4 wasn’t exactly smooth. According to a recent report from The Information, the company repeatedly pushed back the launch because the model failed to meet internal expectations. Those expectations are especially high after DeepSeek, an open-source AI startup from China, released an open-weight model that generated a ton of buzz.
Ultimately, using an optimized model in LMArena puts developers in a difficult position. When selecting models like Llama 4 for their applications, they naturally look to benchmarks for guidance. But as Maverick shows, those benchmarks can reflect capabilities that aren’t actually available in the models the public can access.
As AI development accelerates, this episode shows how benchmarks are becoming battlegrounds. It also shows how Meta is eager to be seen as an AI leader, even if that means gaming the system.