Tech companies are adamant that the regulation of artificial intelligence in the E.U. is preventing its citizens from accessing the latest and greatest products. However, a number of civil society groups feel otherwise, maintaining that AI developers need to produce products that uphold their customers' safety and privacy.
Some of the tech giants' delayed AI launches in the E.U.
There have been a number of instances where AI product launches in the E.U. have been delayed or cancelled as a result of regulation. For instance, this week, Meta's Llama 4 series of AI models was released everywhere except Europe. Its AI chatbots integrated into WhatsApp, Messenger, and Instagram only reached the bloc 18 months after their U.S. launch.
Similarly, Google's AI Overviews currently only appear in eight member states, having arrived nine months later than in the States, and both its Bard and Gemini models had delayed European releases. Apple Intelligence has only just become available in the E.U. with the release of iOS 18.4, after "regulatory uncertainties brought about by the Digital Markets Act" held up its release in the region.
"If certain companies cannot guarantee that their AI products respect the law, then consumers are not missing out; these are products that are simply not safe to be released on the E.U. market yet," Sébastien Pant, deputy head of communications at the European consumer organisation BEUC, told Euronews.
"It is not for legislation to bend to new features rolled out by tech companies. It is instead for companies to make sure that new features, products or technologies comply with existing laws before they hit the EU market."
SEE: EU's AI Act: Europe's New Rules for Artificial Intelligence
EU regulations push companies to build more privacy-conscious tools
E.U. legislation hasn't always excluded E.U. citizens from AI products; instead, it has often compelled tech companies to adapt and deliver better, more privacy-conscious solutions for them. For example:
- X agreed to permanently stop processing personal data from E.U. users' public posts to train its AI model Grok after it was taken to court by Ireland's Data Protection Commission.
- DeepSeek, the Chinese AI model, was banned in Italy over concerns about how it handled Italian citizens' data.
- Last June, Meta delayed the training of its large language models on public content shared on Facebook and Instagram after EU regulators suggested it might need explicit consent from content owners; that training has still not resumed.
Kleanthi Sardeli, a data protection lawyer working with the advocacy group noyb, told Euronews that users generally don't anticipate their public posts being used to train AI models, yet that's precisely what many tech companies are doing, often with little regard for transparency. "The right to data protection is a fundamental human right and it should be taken into account when designing and deploying AI tools."
Google, Meta claim EU AI laws disadvantage citizens, but their revenue is also at stake
Google and Meta have openly criticised European regulation of AI, suggesting it will quash the regionâs innovation potential.
Last year, Google published a report detailing how Europe lags behind other global superpowers in AI innovation. It found that only 34% of E.U. businesses used cloud computing technologies, a critical enabler of AI development, in 2022, far short of the European Commission's target of 75% by 2030. Europe also filed just 2% of global AI patents in 2022, while China and the U.S., the two largest producers, filed 61% and 21%, respectively.
The report placed much of the blame on E.U. regulations for the region's struggles to innovate in advanced technologies. "Since 2019, the EU has introduced over 100 pieces of legislation that impact the digital economy and society. It's not just the sheer number of regulations that's the challenge - it's the complexity," said Matt Brittin, president of Google EMEA, in an accompanying blog post. "Moving from the regulatory-first approach can help to unlock the opportunity of AI."
But Google, Meta, and the other tech giants do stand to suffer financially if the rules prevent them from launching products in the E.U., as the region represents a huge market of 448 million people. On the other hand, if they go ahead with launches but break the rules, they could face hefty fines, which under the AI Act can reach €35 million or 7% of global turnover.
Europe is currently embroiled in multiple regulatory battles with major U.S. tech firms, many of which have already led to substantial fines. In February, Meta declared it was prepared to escalate its concerns over what it saw as unfair regulation directly to the U.S. president.
U.S. President Donald Trump referred to the fines as "a form of taxation" at the World Economic Forum in January. In a speech at February's Paris AI Action Summit, U.S. Vice President JD Vance disparaged Europe's use of "excessive regulation" and said that the international approach should "foster the creation of AI technology rather than strangle it."