A push in California to clamp down on AI giants has taken a bizarre twist after ballot measures that appear to be aimed at Sam Altman’s OpenAI were filed by the stepbrother of an executive at archrival Anthropic, The Post has learned.
In December, a California resident named Alexander Oldham filed a pair of ballot measures that would empower the state to crack down on major AI firms, in part by putting a special focus on policing “public benefit corporations” – the corporate structure that OpenAI recently converted its for-profit arm into.
“I think these are all reasonably common sense measures to take but background wise I’m a nobody,” Oldham told Politico, adding that AI regulation was “just something I find interesting.”
As it turns out, Oldham is the stepbrother of Zoe Blumenfeld, who since 2024 has been head of internal communications at Anthropic – the fast-growing AI giant that has been squaring off against OpenAI for dominance in the sector, according to public records and social media accounts reviewed by The Post.
Oldham has strenuously denied that his familial relationship with Blumenfeld had any influence on his decision to propose the ballot measure. Anthropic has additionally denied that it or Zoe Blumenfeld has played any role in the ballot measures. Blumenfeld declined to comment.
“Anthropic has had no involvement in, coordination with, or knowledge of any ballot proposals filed by Alexander Oldham, and the company does not support either proposal,” a company spokesperson said in a statement.
Meanwhile, Oldham also has ties to tech entrepreneur Guy Ravine, best known for his bitter legal battle with OpenAI over who came up with the idea for the company, sources with knowledge of the situation said.
The ballot proposals, which received a title and summary from the California attorney general’s office last week, call for the creation of state-appointed bodies with authority over AI companies – one of which could approve or reject actions by firms that restructure as public benefit corporations.
While OpenAI is not explicitly named, the measure is “clearly” targeted at Altman, according to Perry Metzger, chairman of Alliance for the Future, a Washington, DC-based AI policy group.
“This is the ‘Be Nasty To Sam Altman Because I Don’t Like You Act,’” Metzger told The Post. “Anyone reading this who knows anything about the players immediately knows that OpenAI is the company that the first one is aimed at.”
Anthropic is also a public benefit corporation and has been one since it was founded in 2021, while OpenAI was originally structured in 2015 as a nonprofit. Experts say Anthropic would likely have an easier time complying with a safety-focused commission than OpenAI, which only completed its restructuring in October and which critics have long accused of prioritizing rapid innovation over humanity’s well-being.
In fact, Anthropic CEO Dario Amodei and his sister Daniela cofounded Anthropic after leaving OpenAI over concerns Altman was not focused enough on safety.
The filings sparked chatter in Silicon Valley – in large part because Oldham has not donated to any California political causes and does not appear to have previously worked on AI policy. Politico has described Oldham as a “mystery to many in the AI policy space.”
“Let me make this very clear: Neither Guy Ravine nor Zoe Blumenfeld are involved in this initiative,” Oldham told The Post in a written statement. “I haven’t been in touch with Guy Ravine in nearly a decade and I have not been in touch with Zoe in more than two years. This initiative was filed, created, and funded by me.”
In response to a detailed list of questions, Oldham said he had used AI chatbots to craft the ballot proposals and was not advised by lawyers. He also insisted that the measures were not intended to target a specific company.
“I spearheaded this initiative because I have been interested in AI safety since 2023,” he added. “I wanted to create a public document to spark a necessary debate on AI regulation and get the public thinking about these ideas.”
Oldham is listed as a resident of Point Richmond, Calif. Based on social media accounts, Oldham appears to have worked for years at Passage Nautical, a yacht chartering business founded and run by his mother, Deborah Reynolds – who was sentenced to a year in state prison and had to repay $1.3 million after pleading guilty to tax evasion in 2022.
A 2006 obituary in the San Francisco Chronicle for Michael Blumenfeld calls him the “beloved father of Zoe” and “beloved stepfather of Alexander.” His will lists Zoe as one of his three biological children, and Oldham’s mother, Deborah Reynolds, was Blumenfeld’s wife.
Ravine vehemently denied that he had colluded with Oldham in any way or had any foreknowledge about the ballot measures, a sentiment echoed by Oldham.
“I have had no involvement in his initiative,” Ravine said. “I have not been in contact with Alex Oldham in approximately 10 years. My only connection to him is that his mother was an investor in a company I was involved with over a decade ago – a tenuous link at best.” He also noted that he does “not have the financial resources to fund ballot initiatives.”
The Post has not seen any evidence that Ravine was involved in the ballot initiative.
US District Judge Yvonne Rogers granted OpenAI a summary judgment in the case last July, ruling that Ravine had infringed on its trademark and even “copied” the company by launching a chatbot and image generator months after OpenAI did.
Buried in a little-noticed footnote in the ruling, Rogers references none other than Deborah Reynolds – Oldham’s mother. Reynolds, the judge wrote, was “Ravine’s ‘friend’ who hosted Ravine on more than 50 occasions and invested $450,000” into one of Ravine’s companies.
In fact, Ravine, Reynolds and Oldham were once part of the same social circles, a source close to the situation said. The Post obtained a photo of the three sitting together at what appeared to be a family party in April 2014.
“Neither Alex or I have seen Guy in decades,” Reynolds said in a text message. “I have not seen Guy in over a decade. I have talked to him since 2014 a few times. Zoe and Alex have no relationship. There’s no connection to his initiative.”
Oldham’s aren’t the first California AI ballot measures to spark controversy. OpenAI has formally accused a nonprofit called Coalition for AI Nonprofit Integrity (CANI), which publicly challenged its restructuring plans, of being a front for Elon Musk, who is currently suing OpenAI for abandoning its nonprofit mission.
CANI, which has faced questions about its funding, is backing a separate ballot proposal filed by Poornima Ramarao, the mother of an ex-OpenAI employee-turned-whistleblower who was ruled to have died by suicide. Ramarao’s proposal explicitly aims to reverse OpenAI’s restructuring.
In a 2023 lawsuit, OpenAI alleged that Ravine, who acquired the now-defunct domain name “open.ai” in March 2015, tried to register a trademark for “Open AI” a day after the company launched to “sow consumer confusion.”
It also submitted a 2022 email in which Ravine told Altman he would give up the website if the company donated millions of dollars to an “academic collaboration.”
Ravine countersued, making the sensational claim that Sam Altman had “hijacked” the idea that became OpenAI from him and stolen his “recipe” for the pursuit of advanced AI. His countersuit earned him a splashy Bloomberg profile titled “Why OpenAI is at war with an obscure idea man.”
One ballot measure introduced by Oldham, dubbed the “California Public Benefit AI Accountability Act,” would create an “independent body” of state officials to oversee public benefit corporations.
Oldham said he has “decided to abandon” that proposal because it was “not properly formulated” – though the state AG has already cleared it for circulation.
The second measure, titled the “California AI Worker Protection Act,” would similarly establish an “AI Safety Commission” that would set rules on AI firms to protect workers. It would have the power to impose penalties, conduct audits and even limit how far “frontier AI” systems can advance technologically.
If approved, the measure would empower the state to enforce “distributed decision-making authority” and “board independence” to prevent “dangerous concentration of control over AI capabilities” – the main allegation that OpenAI’s board leveled at Altman prior to his infamous 2023 firing.
Oldham said the second initiative “is not going to get on the ballot because I don’t have the funding” to pursue a signature-gathering campaign.
“If someone wants to fund it, I would welcome it, but this was not the point,” Oldham said.
In California, ballot proposals need to secure nearly 550,000 signatures by June to qualify for the ballot. Experts said such signature-gathering campaigns can cost tens of millions of dollars.
Meanwhile, some insiders were skeptical about Oldham’s explanation for his actions.
“That’s obviously bulls—t. You do not just casually file a ballot measure in California because it’s a topic you’re interested in,” a veteran California tech policy consultant who reviewed the ballot measures opined to The Post.
Both of Oldham’s ballot proposals are drafted in a way that would allow state-appointed regulators to single out specific companies rather than set industry-wide standards, according to a legal analysis conducted by CALinnovates.
“If you read the language of the measures, you have to squint, tilt your head, and look at this completely cockeyed, to not see how this is aimed at just about anyone other than OpenAI,” said Mike Montgomery, CEO of the tech policy advisory group CALinnovates.
Oldham said any assertion that his proposals were meant to target a specific company is “ridiculous and false.”