Artificial intelligence has become the most polarizing political issue in America, with only 26% of respondents viewing it positively in a new NBC News poll of 1,000 voters. That makes AI less popular than ICE, setting the stage for what industry CEOs fear could become a "ban AI" movement by the 2028 elections.
The dramatic shift in public sentiment comes as the Trump administration wages an unprecedented legal battle with AI company Anthropic, escalating this week with new court filings that expose deep fractures between Silicon Valley and Washington over national security priorities.
Pentagon Declares AI Company a Security Risk
On March 20, TechCrunch reported fresh developments in the explosive legal dispute, in which Defense Secretary Pete Hegseth has officially labeled Anthropic's operations a "national security risk." The blacklisting stems from Anthropic's refusal to grant the Pentagon unfettered military access to its Claude AI system for surveillance operations and autonomous weapons development.
According to sworn declarations filed by Anthropic, the Pentagon claimed both sides were "nearly aligned" on key issues just before President Trump declared the negotiations "kaput." The stark contrast with OpenAI, which secured a Pentagon partnership, highlights the growing divide in Silicon Valley over military cooperation.
Defense Secretary Hegseth had given Anthropic CEO Dario Amodei until February 28 to comply or face penalties. The company's continued resistance has now triggered the most serious government confrontation with an AI firm to date.
Federal Framework Aims to Override State Regulations
While battling Anthropic, the White House simultaneously launched a coordinated effort to strip states of AI regulatory power. White House AI advisor David Sacks announced federal legislation designed to preempt existing rules in California, Colorado, Utah, and Texas.
"This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America's lead in the AI race," Sacks stated. The federal framework focuses on protecting children, intellectual property rights, preventing censorship, and addressing power costs from data centers.
The move builds on Trump's December executive order that blocks conflicting state regulations, signaling the administration's intent to centralize AI governance under federal control.
Congressional Republicans Rally Behind Unified AI Strategy
Senator Marsha Blackburn of Tennessee released a revised federal AI plan on March 18 that incorporates her TRUMP AMERICA AI Act, Kids Online Safety Act, and NO FAKES Act. The comprehensive legislation would impose a "duty of care" on AI developers, enhance chatbot safety measures, and repeal Section 230 protections.
"Congress must respond to [Trump's] call for a unified federal framework for AI that safeguards children, creators, conservatives, and communities nationwide," Blackburn declared. Her plan aligns directly with White House efforts to create uniform national standards while overriding state-level initiatives.
The timing suggests a coordinated Republican strategy to consolidate AI policy under federal oversight, potentially reshaping how campaigns and political organizations deploy AI tools for voter outreach and phone banking operations.
Industry Leaders Warn of Political Backlash
The plummeting public approval ratings have AI executives sounding alarm bells about potential political consequences. Palantir CEO Alex Karp warned on CNBC that AI disruption could significantly impact "highly educated, often female voters, who vote mostly Democrat," while potentially boosting working-class political power.
Karp directly tied AI development to national security concerns, stating: "If you decouple [AI] from the support of the military, you're going to have an enormous problem." His comments underscore how AI has become intertwined with broader political coalition dynamics.
OpenAI's Sam Altman acknowledged growing job displacement anxieties at the BlackRock Summit, while multiple AI CEOs privately expressed fears to Axios about facing a comprehensive "ban AI" political movement in upcoming election cycles.
Campaign Strategy Implications Emerge
For political consultants and campaign strategists, these developments signal a fundamental shift in how AI tools may be regulated and deployed. The federal preemption efforts could standardize rules for AI-powered phone banking and voter outreach across all 50 states, eliminating the current regulatory patchwork that complicates multi-state campaign operations.
However, the toxic polling numbers suggest candidates may need to distance themselves from visible AI use, even as they rely on these technologies behind the scenes for voter modeling and targeting. The political liability of being associated with unpopular AI systems could reshape campaign technology strategies heading into 2028.
The Anthropic-Pentagon standoff also raises questions about which AI systems campaigns can safely use without triggering national security scrutiny. As government agencies increase oversight of AI companies, political organizations may need to carefully evaluate their technology partners to avoid potential complications.
The convergence of national security concerns, regulatory uncertainty, and public skepticism has transformed AI from a campaign advantage into a potential political minefield that could define the next election cycle.