A powerful AI model that appeared anonymously and sparked widespread speculation across the developer community has now been linked back to Xiaomi.
The model, called Hunter Alpha, had fueled rumors that Chinese startup DeepSeek was quietly testing its next-generation system.
The confirmation ends days of industry guessing while offering a clearer look at how AI developers are testing new models in public. It also highlights growing activity among Chinese tech companies developing AI systems designed to handle more complex tasks with less human input.
Xiaomi confirms Hunter Alpha origins
According to Reuters, Xiaomi revealed that Hunter Alpha is an internal test version of its MiMo-V2-Pro model, developed by its MiMo AI team, which is led by former DeepSeek researcher Luo Fuli. The model first appeared on OpenRouter on March 11 without attribution, quickly gaining attention for its capabilities and free access.
Reuters noted that the model was labeled a "stealth" release and rapidly climbed platform rankings, surpassing one trillion tokens in usage. Xiaomi later said the system is intended to serve as the core model for AI agents that can perform multi-step tasks with reduced human input.
"I call this a quiet ambush, not because we planned it, but because the shift from chat to agent paradigm happened so fast, even we barely believed it," Luo wrote on X.
The company said the model will integrate with several agent frameworks and offer limited free access to developers worldwide.
Specs and behavior fueled DeepSeek rumors
Before Xiaomi's confirmation, developers widely speculated that Hunter Alpha was an early version of DeepSeek's upcoming V4 model. The speculation was driven by similarities in both technical specifications and behavior.
EconoTimes reported that the model is described as having roughly one trillion parameters and a context window of up to one million tokens, a combination typically associated with high-end systems.
Mashable and Reuters also highlighted that the model shared a May 2025 knowledge cutoff with DeepSeek's previous releases and declined to identify its creator when asked.
"Reasoning style is hard to disguise and tends to reflect how a model was trained," AI engineer Daniel Dewhurst told Reuters.
At the same time, some experts questioned the connection. Independent tester Umur Ozkul said the systemâs architecture did not fully align with DeepSeekâs known models.
Stealth AI testing gains traction
The Hunter Alpha episode illustrates a broader trend in AI development, in which companies release unnamed models to gather real-world feedback before an official launch.
Reuters explained that platforms like OpenRouter allow developers to test models at scale while withholding attribution. This approach can help companies gather unbiased usage data, though it can also lead to confusion about a modelâs origin.
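To make the testing pattern concrete, here is a minimal sketch of how a developer might probe an unattributed model through OpenRouter's OpenAI-compatible chat completions endpoint, asking the kind of self-identification question mentioned above. The model slug "openrouter/hunter-alpha" is an assumption for illustration only; stealth models appear under whatever slug the platform assigns, and a real OpenRouter API key is required to actually send the request.

```python
import json
import os
import urllib.request

# OpenRouter exposes an OpenAI-compatible chat completions API.
API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_probe_request(model: str, question: str) -> dict:
    """Build a chat-completions payload asking the model about itself."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload to OpenRouter; requires a valid API key."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Hypothetical slug: stealth releases carry whatever name the platform lists.
payload = build_probe_request(
    "openrouter/hunter-alpha",
    "Who created you, and what is your knowledge cutoff?",
)

# Only send the request when a key is actually configured.
key = os.environ.get("OPENROUTER_API_KEY")
if key:
    print(send(payload, key))
```

Because the model answers under an anonymous slug, testers are left to infer its origin from behavioral clues, such as its knowledge cutoff or refusal to name its creator, which is exactly how the DeepSeek speculation arose.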
Hunter Alpha saw rapid adoption after launch, processing large volumes of tokens within days and ranking among the most-used models on the platform, according to the publication.
The incident also emphasizes how quickly new AI systems can gain traction, even without clear branding, as developers experiment with models that promise improved reasoning and long-context capabilities.