AI is hated by both political parties

If you want to know where America is on artificial intelligence, start with this: both Democrats and Republicans hate it. For the right, AI looks like a censorship machine. For the left, it looks like a corporate power grab. That anger is already showing up in new fights in Washington over who gets to regulate AI, but however those battles end, the message is clear: AI is now a political target.

You could see it in Tucker Carlson’s interview with OpenAI CEO Sam Altman. Carlson asked what many conservatives are asking: Will AI supercharge censorship? Will it put Big Tech and government in charge of “truth”? Beneath that is a fear that a handful of tech companies will quietly decide what your kids see, what you are allowed to say, and which viewpoints get pushed to the margins.

Progressives are furious too, just from a different angle. They look at AI and see power flowing from workers to boardrooms. They see insurance companies using software to decide who gets care, HR tools that make it easier to cut staff, and models built off artists’ work without consent or compensation. For them, AI is less about the culture war and more about a wallet-and-dignity war: who gets a job, who keeps a job, and whose work can be copied.

Here is the twist. For all the shouting, the two sides are closer than they think. Both parties are outraged by scams and deepfakes. Both hate political impersonations that could swing an election with a fake voice or video. Both worry about parents or grandparents getting a call from “a grandson” who sounds real and begs them to wire money. Even Altman now warns that voice authentication is essentially dead and that we are not ready for the fraud AI makes possible.

So the question is not whether AI should be regulated, but how.

Here is where we should start.

First, draw hard lines around impersonation. Using AI to fake someone’s voice or face in order to trick voters, seniors, or consumers should be clearly illegal, with serious penalties. Platforms should have to remove that content quickly and know who is buying political and commercial ads.

Second, require consent and payment for people’s likeness and work. If you want to train on a singer’s voice, a reporter’s archive, or an actor’s image, you should need a license, just as you would for any other valuable property.

Third, insist on plain-English transparency wherever AI helps make life-altering decisions, especially in health care, housing, and finance. If software helps decide whether you get a mortgage, chemotherapy, or a credit line increase, you should be told that it was used, what factors mattered, and how to appeal.

Fourth, protect the people who are easiest to hurt: kids and seniors. That means kid-safe default modes for chatbots, automatic scam warnings on suspicious calls and texts, “trusted contact” callbacks at banks before large transfers from older customers, and fast responses to sextortion and online blackmail that use AI tools to terrorize teenagers.

None of this requires Congress to solve big philosophical questions about intelligence. It requires something much more basic: political will. Stop the fakery. Stop the theft. Stop the scams. Put people back in charge.

At Met Council, the largest Jewish charity fighting poverty in America, we see how powerful AI can be when it is used for people, not against them. We use AI to cut paperwork, speed food deliveries to homebound seniors, and protect client privacy. We are not using AI as an excuse to cut staff. We are paying our team to learn these tools, giving them training and time so they can use AI to serve more people better than ever.

If a nonprofit can do that, big companies can do it too. Governments can do it. Schools can do it. Make AI the tool your workers use, not the weapon you use on your workers.

If the AI industry wants to avoid a political revolt, it has to meet Americans where they are. Show that AI serves people, not the other way around. Prove that consent, safety, and fairness are built into the business model, not tacked on at the end. Deliver real protections against impersonation, theft, and fraud, or risk a bipartisan coalition that finally agrees on one thing: slowing AI down.

Greenfield is the CEO of the Met Council.
