Thursday, May 7, 2026
Anthropic’s SpaceX compute deal comes as AI data center backlash grows—fueled by both real grievances and conspiracy theories

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: U.S. and China pursue AI guardrails to handle race to build powerful AI systems…How a congressional primary became a proxy battle over AI…The reggae band in a nightmare battle against AI slop remixes. 

I’ve spent a good deal of my time at Fortune focused on the mega AI data center boom—the new era of sprawling, gigawatt-scale AI campuses humming with racks of chips powering frontier models. That shift came sharply into focus with projects like Elon Musk’s Colossus supercomputer in Memphis, Tennessee, which broke ground in 2024.

This week, three strands of this sprawling and deeply physical AI infrastructure story converged for me in a way I hadn’t seen before.

Anthropic’s growing compute hunger
First, there was Anthropic’s deal with Elon Musk’s newly renamed AI company, SpaceXAI (formerly xAI, before its acquisition by SpaceX), to secure more computing power from, you guessed it, the Colossus supercomputer in Memphis. The massive facility reportedly houses more than 220,000 Nvidia GPUs. Anthropic said the added compute would help expand capacity for Claude Pro and Claude Max subscribers who have been complaining about usage limits and availability.

AI executives often talk about future models as if they will become utilities—something like electricity or water flowing from a tap, instantly available whenever people need more intelligence. But none of that works unless the companies building those systems can first access the real faucet underneath it all: electricity powering vast clusters of AI chips.

That helps explain why Anthropic, a company known for emphasizing AI safety, is now turning to Elon Musk—who had previously called Anthropic a company that “hates Western civilization”—to quench its growing thirst for computing power.

Data center backlash is growing
It also leads to the second thread in this story: As Big Tech feeds its growing hunger for computing power by building ever-larger AI data centers—often in rural areas rich in land and high-voltage transmission access—backlash against these behemoths has started spilling into the broader public consciousness.

That became clear yesterday after I published the third story in my series on communities affected by the AI data center boom. In March, I traveled to Saline Township, Michigan, an agricultural community outside Ann Arbor, where residents fought plans for a giant OpenAI-Oracle data center. The town board ultimately voted against the project. But in a dramatic legal twist, the developer sued, and within weeks the township settled the case, allowing construction to begin less than two months after the original vote.

It’s a complex (and long) story, but the article quickly drew hundreds of thousands of views. I suspect that’s because it tapped into something much larger than a local zoning fight: a growing feeling among many Americans that the AI boom is becoming physical, visible, and political. Concerns are surfacing around transparency, land use, electricity demand, water consumption, environmental strain, and whether local communities have any real power to push back.

I’ve now visited five communities grappling with mega AI data center projects in Texas, Arizona, Louisiana, and Michigan. And whether residents ultimately support or oppose these developments, there’s little doubt they have brought disruption, anxiety, and profound questions about what happens when Big Tech comes to town.

And increasingly, those legitimate anxieties are colliding with something stranger online.

A disturbing rise in conspiracy theories
Threaded through both the AI data center development trend and community pushback has been a newer phenomenon I hadn’t fully appreciated until now: a disturbing rise in conspiracy theories about the data centers themselves.

In some of the anti-data-center Facebook groups I follow, these strange conspiracy theories have begun to drown out the real grievances communities are trying to publicize.

There are posts calling AI data centers “surveillance centers,” “military bases,” “killing machines,” and tools for “population control.” Others accuse officials of placing data centers on farmland so local residents will lose the ability to grow food. I came across one especially bizarre claim alleging Nvidia was secretly installing “mini AI data centers” outside new homes in order to eventually “implant” people.

Even Robert F. Kennedy Jr. has weighed in, tying data centers to his longstanding concerns about electromagnetic radiation—claims that mainstream scientific bodies say remain unproven.

Conspiracy theories thrive where trust breaks down
These conspiracy theories seem to be spreading because of a trust vacuum. In many communities, trust is already frayed by opaque planning processes, confusing technical language, aggressive timelines, and a broader feeling that decisions are being made far away by companies and officials who assume the public will simply adapt.

The AI industry may view these facilities as the critical infrastructure of the future. But unless companies and policymakers get much better at explaining that future—and involving communities in shaping it—the backlash surrounding AI data centers is likely to keep growing.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
AI IN THE NEWS
US and China pursue AI guardrails to handle race to build powerful AI systems. The Trump administration and Beijing are weighing a formal new U.S.-China AI dialogue ahead of a possible Trump-Xi summit next week in Beijing, reflecting growing concern on both sides that the race to build ever more powerful AI systems could create strategic risks neither country can manage alone. According to the Wall Street Journal, the talks could cover frontier-model safety, autonomous weapons, misuse of open-source AI tools, and even future crisis-management mechanisms such as an AI hotline between the two powers. The effort would revive a limited AI dialogue first launched under the Biden administration in 2023, when the U.S. and China agreed that humans—not AI—should retain control over nuclear launch decisions. Analysts say the renewed push underscores how AI is increasingly being viewed through a Cold War-style framework of “strategic stability,” with fierce competition paired with attempts to prevent catastrophic escalation.

How a congressional primary became a proxy battle over AI. As a native New Yorker, I was fascinated by a new story in The New Yorker examining how a Manhattan congressional primary has turned into a proxy war over AI regulation and industry influence. I have reported before on New York Assembly member Alex Bores, whose campaign has drawn attention—and money—from rival corners of the AI world, with OpenAI-linked interests opposing him and Anthropic-aligned groups backing him. But as the primary looms, the story also shows how quickly AI has evolved from a niche tech policy issue into something shaping local, state, and national politics.

The reggae band in a nightmare battle against AI slop remixes. This Wired story digs into how the California reggae band Stick Figure unexpectedly found itself at the center of the AI music debate this week after its seven-year-old song “Angels Above Me” suddenly exploded on TikTok and streaming platforms—largely because of unauthorized AI-generated remixes that the band says were created “in one click” and generated millions of plays without compensation. The episode highlights the growing chaos AI-generated music is creating for artists and streaming platforms alike, as tools for remixing songs proliferate and companies struggle to distinguish legitimate remixes from AI “slop” designed to siphon royalties. According to Deezer, AI-generated music uploads on its platform have surged from 18% of daily uploads in 2025 to 44% in 2026, with the company estimating that most are fraudulent. The situation is increasingly forcing platforms, labels, and artists to confront difficult questions around copyright, attribution, royalties, and whether the music industry’s existing systems can keep pace with generative AI.
EYE ON AI NUMBERS
60%
That's how many organizations now report measurable ROI from AI, according to new findings from a Dun & Bradstreet survey of 10,000 global businesses. However, the study also found that only 5% say their data is fully ready for AI. Data and infrastructure gaps remain the primary constraint on scaling AI ROI, with 50% citing access limitations, 40% citing quality issues, and 38% reporting integration challenges.