State AI laws = economic, legal & security risks

Artificial intelligence is driving what amounts to a new industrial revolution. But instead of waiting to see how the technology matures, states are rushing to regulate it — risking a confusing patchwork of rules before a truly national (let alone global) AI market can even form.

In California, Gov. Gavin Newsom recently signed SB 53, which, for now at least, imposes transparency requirements while refraining from heavy-handed, direct obligations on the design of AI models. In June, the New York Legislature passed the RAISE Act, leaving Gov. Kathy Hochul with a major decision on her desk.

Unlike SB 53, the RAISE Act would prohibit AI model deployment unless it meets certain government standards and would hold developers liable for harms caused by third parties — what one analyst has rightly characterized as “an unmanageable burden.”

Aside from each bill’s substantive issues, what matters most is that they aren’t uniform — and they aren’t alone. State legislatures have introduced more than 1,100 AI-related bills — including 147 in New York, more than anywhere else. Virtually no two bills are the same. This wave of legislation threatens to overlap, contradict, and create compliance nightmares at the very moment AI is poised to deliver broad economic benefits.

Analysts expect AI to add nearly $7 trillion to global GDP in the next decade, with those benefits spread far beyond Silicon Valley. Cities like Rochester, Buffalo, and Syracuse are positioning themselves as AI hubs, competing for investment and talent. And New York City already ranks as the world’s second-largest startup ecosystem, just behind Silicon Valley. What happens in these regions will play a major role in determining whether the United States leads or lags in the global AI race with China.

Because AI systems can’t be neatly tailored state by state, one restrictive law in Albany or Sacramento can have extraterritorial effects and end up dictating terms nationwide. That creates constitutional concerns under the Commerce Clause, which gives Congress, not the states, the authority to regulate interstate commerce. It also creates serious economic challenges for innovators working in a borderless digital economy.

Hochul recently acknowledged this concern, and she’s right to worry that a maze of conflicting state laws isn’t “a model for inspiring innovation.” Importantly, such an approach would be particularly damaging to smaller firms: The very companies driving new applications — especially those outside the big coastal hubs — would be the hardest hit.

Big companies with large legal teams might absorb the costs of complying with 50 different state regimes, but startups will be smothered before they have a chance to grow. That undermines competition and tilts the market toward the already-entrenched.

We’ve seen this play out before. A thicket of state privacy laws, adopted without a federal framework, has created overlapping rules that researchers estimate will cost U.S. businesses $1 trillion over the next decade — much of it borne by companies with no presence in the states writing the laws.

AI is even more complex than privacy, and the risks of fragmentation are even greater. This won’t just mean more paperwork: developers and consumers alike will lose out as useful innovations never reach the market at all.

AI and quantum computing will define national security in the coming years. China is pouring resources into these technologies under a comprehensive national plan with the goal of global dominance. If the U.S. instead fractures into 50 different regulatory approaches, American innovators will face barriers their competitors abroad never see.

That doesn’t mean legislatures should do nothing. But instead of imposing unmanageable cross-border obligations on AI model design, states should focus on harmful in-state uses of AI, such as fraud or discrimination. The goal should be to protect consumers from actual harm without undermining innovation and competitiveness.

Most of all, we need a national framework that creates certainty for innovators and promotes AI investment, while providing guidance for states to address genuine risks. Some have even suggested pausing new state laws until Congress acts. That’s an idea worth taking seriously.

We’re at the very beginning of AI. The technologies being built today will shape how we work, learn, and live for decades. With Silicon Valley, New York City, and rising tech regions across the country, the U.S. has a once-in-a-generation chance to lead. But that depends on coherent national policy — not a patchwork of state laws that could hand the advantage to others.

Stout is director of innovation policy at the International Center for Law & Economics. Manne is ICLE’s president and founder, a distinguished fellow at Northwestern University’s Center on Law, Business, and Economics, and a visiting professor of law at IE University, Madrid.
