What can you do if you’re a multinational technology company that’s facing increasing restrictions on access to next-gen semiconductor technology?
If you’re Alibaba, the answer is simple: you stick it to those who are trying to keep you down by creating your own chips to power new, state-of-the-art data centers, reinvigorating the domestic workforce in the process.
Breaking ground on the newest data center
Having recently teamed up with China Telecom, Alibaba officially flipped the switch on its latest data center in early April. The move comes just a month after a large-scale cluster of Huawei Ascend 910C AI chips was brought online in Shenzhen.
The new data center, located in Shaoguan, features a backbone of 10,000 Zhenwu chips — all of which were manufactured domestically by Alibaba in direct response to US export restrictions on AI chips from Nvidia and other American suppliers.
Alibaba’s Zhenwu chips are specifically designed to support sophisticated AI models with hundreds of billions of parameters. While Alibaba’s T-Head division is the driving force behind the design and distribution of the Zhenwu chips, the data center itself is owned and operated by China Telecom.
If everything goes as planned, Alibaba and China Telecom hope to expand their capacity from 10,000 to 100,000 chips in the near future. The new data center will serve a variety of industries and sectors in the coming years, including healthcare, materials science, government, and more.
Alibaba’s move into custom silicon isn’t just reactive. It gives the company tighter control over performance, costs, and the scaling of its AI systems, much like other tech giants that design chips in-house. At the same time, it reflects a broader shift toward self-reliance, as companies in China build domestic alternatives to navigate growing restrictions in global semiconductor supply chains.
Embracing the AI boom
But the latest data center isn’t Alibaba’s first foray into the world of AI. That credit goes to a family of large language models (LLMs) known as Qwen, which is developed and maintained by the team at Alibaba Cloud.
Although Qwen’s initial beta took place in April 2023, it wasn’t made available to the general public until September of that same year. Based on the Llama architecture developed by Meta AI, Qwen has received regular updates ever since.
The latest version of Alibaba’s flagship LLM, Qwen 3.6-Plus, was purpose-built to excel at agentic AI, coding, and retrieval-augmented generation (RAG).
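For readers unfamiliar with retrieval-augmented generation, the pattern is simple: before the model answers, relevant documents are retrieved and prepended to the prompt so the response is grounded in real text. The toy Python sketch below illustrates the retrieve-then-prompt flow with a bag-of-words retriever; all document contents and function names are illustrative, and no real Qwen API is used.

```python
# Conceptual sketch of the retrieval-augmented generation (RAG) pattern.
# Everything here is illustrative; a production system would use a real
# embedding model and an LLM endpoint instead of word counts.
import math
import re
from collections import Counter

DOCUMENTS = [
    "The Shaoguan data center runs on 10,000 domestically made chips.",
    "Qwen is a family of large language models from Alibaba Cloud.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
]

def bag_of_words(text: str) -> Counter:
    """Turn text into a lowercase word-count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Step 1 of RAG: rank documents by similarity to the query."""
    q = bag_of_words(query)
    ranked = sorted(
        docs, key=lambda d: cosine_similarity(q, bag_of_words(d)), reverse=True
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Step 2 of RAG: prepend the retrieved context to the model prompt."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

context = retrieve("What is Qwen?", DOCUMENTS)
prompt = build_prompt("What is Qwen?", context)
```

In a real deployment, step 3 would send `prompt` to the language model; the retrieval step is what lets the model answer from documents it never saw during training.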
Ramping up the AI race both domestically and internationally
In the face of increasing export restrictions from the United States, AI companies in China have little choice but to embrace domestic chip development and manufacturing. Not only does this pit them directly against US-based AI companies, but it also intensifies competition among Chinese firms themselves.
Also read: Nvidia-backed Reflection AI is planning a multibillion-dollar data center in South Korea as the US pushes open AI infrastructure to counter Chinese rivals.