A ban on state AI laws could smash Big Tech’s legal guardrails

News Room

Senate Commerce Republicans have kept a ten-year moratorium on state AI laws in their latest version of President Donald Trump’s massive budget package. And a growing number of lawmakers and civil society groups warn that its broad language could put consumer protections on the chopping block.

Republicans who support the provision, which the House cleared as part of its “One Big Beautiful Bill Act,” say it will help ensure AI companies aren’t bogged down by a complicated patchwork of regulations. But opponents warn that should it survive a vote and a congressional rule that might prohibit it, Big Tech companies could be exempted from state legal guardrails for years to come, without any promise of federal standards to take their place.

“What this moratorium does is prevent every state in the country from having basic regulations to protect workers and to protect consumers,” Rep. Ro Khanna (D-CA), whose district includes Silicon Valley, tells The Verge in an interview. He warns that as written, the language included in the House-passed budget reconciliation package could restrict state laws that attempt to regulate social media companies, prevent algorithmic rent discrimination, or limit AI deepfakes that could mislead consumers and voters. “It would basically give a free rein to corporations to develop AI in any way they wanted, and to develop automatic decision making without protecting consumers, workers, and kids.”

“One thing that is pretty certain … is that it goes further than AI”

The bounds of what the moratorium could cover are unclear — and opponents say that’s the point. “The ban’s language on automated decision making is so broad that we really can’t be 100 percent certain which state laws it could touch,” says Jonathan Walter, senior policy advisor at the Leadership Conference on Civil and Human Rights. “But one thing that is pretty certain, and feels like there is at least some consensus on, is that it goes further than AI.”

That could include accuracy standards and independent testing required for facial recognition models in states like Colorado and Washington, he says, as well as aspects of broad data privacy bills across several states. An analysis by nonprofit AI advocacy group Americans for Responsible Innovation (ARI) found that a social media-focused law like New York’s “Stop Addictive Feeds Exploitation for Kids Act” could be unintentionally voided by the provision. Center for Democracy and Technology state engagement director Travis Hall says in a statement that the House text would block “basic consumer protection laws from applying to AI systems.” Even state governments’ restrictions on their own use of AI could be blocked.

The new Senate language adds its own set of wrinkles. The provision is no longer a straightforward ban; instead, it conditions state broadband infrastructure funds on adhering to the same ten-year moratorium. Unlike the House version, the Senate version would also cover criminal state laws.

Supporters of the AI moratorium argue it wouldn’t apply to as many laws as critics claim, but Public Citizen Big Tech accountability advocate J.B. Branch says that “any Big Tech attorney who’s worth their salt is going to make the argument that it does apply, that that’s the way that it was intended to be written.”

Khanna says that some of his colleagues may not have fully realized the rule’s scope. “I don’t think they have thought through how broad the moratorium is and how much it would hamper the ability to protect consumers, kids, against automation,” he says. In the days since it passed through the House, even Rep. Marjorie Taylor Greene (R-GA), a staunch Trump ally, said she would have voted against the OBBB had she realized the AI moratorium was included in the massive package of text.

California’s SB 1047 is the poster child for what industry players dub overzealous state legislation. The bill, which intended to place safety guardrails on large AI models, was vetoed by Democratic Governor Gavin Newsom following an intense pressure campaign by OpenAI and others. Companies like OpenAI, whose CEO Sam Altman once advocated for industry regulation, have more recently focused on clearing away rules that they say could stop them from competing with China in the AI race.

“What you’re really doing with this moratorium is creating the Wild West”

Khanna concedes that there are “some poorly-crafted state regulations” and that making sure the US stays ahead of China in the AI race should be a priority. “But the approach to that should be that we craft good federal regulation,” he says. Given the pace and unpredictability of AI innovation, Branch says, “to handcuff the states from trying to protect their citizens” without being able to anticipate future harms, “it’s just reckless.” And if no state legislation is guaranteed for a decade, Khanna says, Congress faces little pressure to pass its own laws. “What you’re really doing with this moratorium is creating the Wild West,” he says.

Before the Senate Commerce text was released, dozens of Khanna’s California Democratic colleagues in the House, led by Rep. Doris Matsui (D-CA), signed a letter to Senate leaders urging them to remove the AI provision — saying it “exposes Americans to a growing list of harms as AI technologies are adopted across sectors from healthcare to education, housing, and transportation.” They warn that the sweeping definition of AI “arguably covers any computer processing.”

Over 250 state lawmakers representing every state also urge Congress to drop the provision. “As AI technology develops at a rapid pace, state and local governments are more nimble in their response than Congress and federal agencies,” they write. “Legislation that cuts off this democratic dialogue at the state level would freeze policy innovation in developing the best practices for AI governance at a time when experimentation is vital.”

Khanna warns that missing the boat on AI regulation could have even higher stakes than other internet policies like net neutrality. “It’s not just going to impact the structure of the internet,” he says. “It’s going to impact people’s jobs. It’s going to impact the role algorithms can play in social media. It’s going to impact every part of our lives, and it’s going to allow a few people [who] control AI to profit, without accountability to the public good, to the American public.”
