
The battle between artificial intelligence companies has jumped from the tech world straight into American politics. Anthropic announced Thursday that it will pour $20 million into this year's midterm races.
The money goes to Public First Action, a newly formed group that wants states to keep their power to write AI rules. That puts Anthropic on a collision course with both OpenAI’s political operation and the Trump White House, which wants Washington to take control of AI policy nationwide.
“The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests,” Anthropic said in Thursday’s announcement.
The group is backing candidates who oppose efforts to strip states of their authority over AI technology. One early beneficiary is Marsha Blackburn, the Republican running for Tennessee governor, who fought against federal bills that would have blocked state legislatures from passing their own AI laws.
Public First Action faces steep odds against Leading the Future, the opposing group backed by OpenAI president Greg Brockman and tech investor Marc Andreessen. That operation has collected $125 million since launching in August 2025. Andreessen’s venture firm A16Z holds a stake in OpenAI, making the funding fight even more personal between the rival AI developers.
President Trump signed an order in December that directly threatens the state laws Anthropic wants to protect. The directive tells federal agencies to build a national AI framework with minimal rules, then use it to override tougher state regulations.
Trump’s order goes further by creating a Justice Department task force specifically designed to challenge state AI laws in court. States with rules Trump considers too strict could lose federal funding. His AI advisor, David Sacks, already singled out Colorado’s law as “probably the most excessive” one on the books.
Several states have regulations taking effect or moving through legislatures in 2026. Colorado delayed its AI Act until June 30, 2026, after facing pressure, but the law will still require companies building “high-risk” AI systems to prevent discrimination in their algorithms. California passed seven AI laws in 2025, with its Transparency in Frontier AI Act starting January 1, 2026. Texas banned AI use for certain purposes through its Responsible AI Governance Act.
Cryptopolitan previously reported that Anthropic raised $2 billion at a $60 billion valuation last year, followed by a massive $15 billion investment from Microsoft and Nvidia that pushed its valuation to around $350 billion. Those investors now have billions riding on how AI gets regulated.
The company’s blog post Thursday took a veiled shot at OpenAI without naming it, warning that “vast resources have flowed to political organizations that oppose” efforts to make AI safer.
If candidates backed by Public First Action win enough seats, they could block federal preemption bills in Congress. That would keep the state-by-state approach alive, at least temporarily.
The rivalry between Anthropic and OpenAI runs much deeper than just funding levels. Founded by siblings Dario and Daniela Amodei after they left OpenAI over safety concerns, Anthropic has built its entire identity around making AI technology less risky. OpenAI and its backers prefer lighter rules that let innovation move faster.
That philosophical gap now plays out in campaign contributions and lobbying. Earlier this year, OpenAI asked Trump to block state AI rules in exchange for government access to its models, arguing that fragmented state laws would damage America’s AI leadership.
But the odds look tough. Leading the Future’s six-to-one funding advantage gives OpenAI’s side more money to spend on ads, staff, and ground operations. Trump’s executive order also hands federal agencies tools to challenge state laws immediately, without waiting for Congress.
The fight reveals a deeper split in Silicon Valley over how much oversight AI should face, one that extends well beyond these two companies and their political war chests.
Voters in states that passed AI laws will essentially get to choose which vision they prefer when they cast ballots this fall. Their decision could determine whether AI development happens under a patchwork of state rules or a uniform federal system with fewer restrictions.