Legislation that mandates safety testing of artificial intelligence technologies is at risk of being pushed aside by the U.K. government, the head of the tech select committee says. Labour's Chi Onwurah warned that the delay may reflect political efforts to align more closely with the United States, particularly the Trump camp's outspoken opposition to AI regulation.
One key focus of the AI Safety Bill is to legally mandate that companies uphold their voluntary agreements to submit frontier AI models for government safety evaluations before deployment. Nine companies, including OpenAI, Google DeepMind, and Anthropic, made such agreements with a number of international governments in November 2023.
SEE: UK Report Shows AI is Advancing at Breakneck Speed
In November 2024, technology secretary Peter Kyle said he would implement the legislation in the next year. At the time, Chi Onwurah, the Labour chair of the Science, Innovation and Technology Select Committee, which is in charge of examining tech policy, was under the impression it was "coming soon," she told The Guardian, but now she's worried about whether that is really the case.
Political influences and transatlantic ties
"The committee has raised with Patrick Vallance [the science minister] the lack of an AI safety bill, and whether that is in response to the significant criticism of Europe's approach to AI, which J.D. Vance and Elon Musk have made," she added.
In a speech at February's Paris AI Action Summit, U.S. Vice President Vance disparaged Europe's use of "excessive regulation" and said that the international approach should "foster the creation of AI technology rather than strangle it."
Europe has solidified a pro-regulation reputation through the AI Act and numerous ongoing regulatory battles with major tech companies, resulting in hefty fines. It is no secret that Trump is not happy about this, referring to the fines as "a form of taxation" at the World Economic Forum in January.
SEE: Meta to Take EU Regulation Concerns Directly to Trump, Says Global Affairs Chief
U.K. ministers do not plan to publish the AI Bill before the summer in an attempt to please the Trump administration, anonymous Labour sources told The Guardian last month. But this is not the only recent evidence that the country is trying to keep the States on side.
Safety vs. innovation: the U.K.'s strategic shift
Last month, the U.K.'s AI oversight body was renamed from the AI Safety Institute to the AI Security Institute, a rebranding seen by some as a shift away from a risk-averse stance and toward national interest framing. In January, Prime Minister Keir Starmer released the AI Opportunities Action Plan, which put innovation front and centre and made little mention of AI safety. He also skipped the Paris AI Summit, where the U.K. declined to sign a global pledge for "inclusive and sustainable" AI, as did the U.S.
The shift toward innovation-first policymaking comes with economic implications. Limiting AI innovation in the U.K. could have a significant economic impact, with a Microsoft report finding that adding five years to the time it takes to roll out AI could cost over £150 billion. Stricter regulations could also deter major tech firms like Google and Meta from scaling in the U.K., prompting concern from investors.
A spokesperson for the Department for Science, Innovation and Technology told The Guardian: "The government is clear in its ambition to bring forward AI legislation which allows us to safely realise the enormous benefits and opportunities of the technology for years to come."
"We are continuing to refine our proposals which will incentivise innovation and investment to cement our position as one of the world's three leading AI powers, and will launch a public consultation in due course."