Conservative activist Robby Starbuck slapped Mark Zuckerberg’s Meta with a defamation lawsuit over false responses about him generated by its AI chatbot, including claims that he was a Holocaust denier and had participated in the Jan. 6 Capitol riot.
Starbuck – who is best known for leading campaigns against the likes of Walmart and Harley-Davidson over divisive DEI practices – is seeking more than $5 million in damages from the suit, which was filed in Delaware Superior Court on Tuesday and first obtained by the Wall Street Journal.
In a video posted on his X account announcing the lawsuit, Starbuck shared a clip in which Meta AI’s voice output claimed that he was “linked to extremist views, including anti-Semitism and Holocaust denial.” It added that he was involved in the Capitol riot by “filming and promoting the event.”
“With my lawsuit today, I intend to make them solve this problem once and for all,” Starbuck said. “We cannot allow one of the largest and most powerful companies in human history to make defamation a core feature of its business model.”
Meta did not respond specifically to Starbuck’s allegations.
“As part of our continuous effort to improve our models, we have already released updates and will continue to do so,” a Meta spokesperson said in a statement.
Starbuck, 36, told the Journal that he first learned of the situation last August after a Harley-Davidson dealer in Vermont posted a screenshot on X in which Meta’s AI claimed that Starbuck was at the Capitol riot and had links to the QAnon conspiracy theory.
Starbuck denied the chatbot’s allegations and wrote online that day that Meta would “hear from my lawyers.” However, the false claims allegedly continued for months afterward.
The anti-woke activist told the newspaper that he is concerned that error-prone AI could eventually be used to determine factors like personal credit or insurance risk.
Starbuck, who says he was in Tennessee on Jan. 6, 2021, when the Capitol riot was taking place, added that his researchers have not been able to trace the chatbot’s claims to any specific information source.
“The damage to my reputation is obvious, but so is the potential risk to our elections and your reputation next,” Starbuck said in his X video.
Starbuck’s lawsuit states that shortly after he contacted the company last summer, his lawyers received a response from a Meta attorney who said the company was taking his allegations “seriously” and an investigation was underway.
The lawsuit alleges that Meta’s AI was making false claims about him as recently as this month.
The company’s AI appears to have now enacted limits on searches related to Starbuck, responding to direct questions with the answer: “Sorry, I can’t help you with the request right now.”
According to the Journal, no US court has awarded damages to someone who claimed to have been defamed by an AI chatbot.
Traditionally, social media platforms like Meta have enjoyed sweeping protection thanks to Section 230, a clause that protects them from being held liable for third-party content posted on their platforms.
However, it remains unclear if the same protections apply in the case of outputs generated by a tech company’s own AI products. Several lawmakers are pushing for a federal framework regulating AI, but no comprehensive legislation has yet passed.
Meanwhile, Meta, Google and other Big Tech giants active in the AI race have repeatedly said that their chatbots are prone to hallucinations, or spitting out false information.