Business

From Indiana to Idaho, a backlash against AI gathers momentum

SAN FRANCISCO -- When Michael Grayston, an evangelical pastor in Austin, Texas, heard that a friend’s relationship with an artificial intelligence companion had nearly destroyed the friend’s marriage, he saw a moral danger that needed to be addressed.

When Jack Gardner, a Boise, Idaho, musician, discovered that AI tools had generated songs using copyrighted music, he and his wife, Cathryn, an elementary school band teacher, started a local group to call for AI legislation.

And when Bart and Amy Snyder, farmers in Wolcott, Indiana, learned that a data center was going to be built 300 yards from their home, they worried it would drain local aquifers and started a campaign to unseat three county officials who had supported it.

Although none of them had been politically active before, they became part of a growing national movement that pits the tech industry and its billionaires against a diverse coalition of parent groups, religious leaders, environmentalists and former Tea Party activists. Politically they range from populist firebrand Steve Bannon to Bernie Sanders, the progressive senator from Vermont.

The reasons they are pushing back against the technology are as varied as their backgrounds. But they all worry that tech companies are more focused on cashing in on AI than on how it may affect regular people. They also share a sense that all that money will flow into the hands of Silicon Valley’s ultrawealthy, while the middle and working classes shoulder the costs.

Many of these AI critics say they are far from being Luddites having a reflexive reaction to new, scary technology. They believe that people in Washington, especially President Donald Trump, are protecting Silicon Valley rather than reining it in. They want regulation -- or at least a debate -- before AI becomes entrenched in American life.

“Given AI and robotics are going to impact every man, woman and child in this country, one might think that there’d be a massive debate in the United States Congress: What does it mean? Where do we go? How do we deal with it?” Sanders said in an interview with The New York Times. “There has been minimal, minimal discussion.”

A White House spokesperson, Davis Ingle, said in a statement that “it is the policy of the Trump administration to sustain American A.I. dominance to protect our national security and ensure we remain the world’s leading economy.”

The White House’s policy framework for AI, which was issued in March, calls on AI services to protect children. This year, Trump also issued a proclamation that said tech companies “must pay for the full cost of the energy and infrastructure needed to build and operate data centers.”

When OpenAI released ChatGPT in 2022, the chatbot became the fastest-growing software product ever, reaching 100 million users in just two months. It didn’t take long for the industry to bet its future on the new AI technology, spending hundreds of billions of dollars on the massive data centers needed to develop it, which are now popping up around the world.

Even in the early days of the AI boom, industry leaders like Elon Musk, OpenAI’s Sam Altman and Anthropic’s Dario Amodei frequently warned that AI was a risk to jobs and could have unforeseen, even dangerous consequences.

“If this technology goes wrong, it can go quite wrong,” Altman told lawmakers in 2023.

The public may have taken those warnings to heart. In a recent Quinnipiac University poll of American adults, 55% said they saw AI as a force for harm rather than good -- a surprisingly negative reaction to a technology that has become a driver of the economy.

Bannon has said the negativity reflects concerns about how the technology has been introduced. “There’s not clarity, there’s not transparency, and there’s certainly not accountability,” he said on a podcast, “The Last Invention,” in January. “That’s why you’ve seen not just interest but building anger of working-class people.”

People new to this movement are finding a number of already established organizations with ties to effective altruism, a philosophy that, among other things, is concerned about the safety of AI. Dustin Moskovitz, a Facebook co-founder, and Pierre Omidyar, founder of eBay, have been funding some of these groups.

AI’s reputation with the public hasn’t been helped by the social media era that preceded it. Social media, despite its wild popularity, has been criticized for heightening political polarization and worsening mental health.

In March, Meta and YouTube, which is owned by Google, were found responsible by a jury in Los Angeles for creating an addictive product that harmed a young user. The two companies, which together make more than $50 billion in profit each quarter, were fined $6 million. A jury in a separate trial in New Mexico ordered Meta to pay $375 million in damages for failing to protect young users from sexual predators.

Job cuts in the tech industry are fueling the perception that Silicon Valley is gutting its own workforce with AI before turning it on the rest of the economy. Just last week, Meta said it was cutting 10% of its workers, while Microsoft targeted up to 7% of its veteran employees in the United States with buyout offers. Nationwide, tech jobs declined by about 150,000 from 2022 through 2025, according to data from the Census Bureau.

Amy Kremer, a former Tea Party leader, recently became chair of Humans First, a conservative anti-AI group. It was spun out of the Center for AI Safety, which has had effective altruism ties. She said the “monster of social media” and the lack of regulation had inspired her to get involved.

“This is the battle of our lifetime,” she said.

(The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied the suit’s claims.)

Tech leaders are keenly aware of the backlash. The risks were driven home last month when a man opposed to AI threw a Molotov cocktail at the front gate of Altman’s San Francisco compound.

Not every tech executive has paired warnings of danger with the buildout of an AI empire. Jensen Huang, CEO of Nvidia, the AI chipmaker and the world’s most valuable publicly traded company, has consistently emphasized the opportunities of AI. Huang says AI will help people do their jobs better, not replace them.

“More jobs will be created,” he said in January. “Living will be more affordable.”

So far, the industry’s most notable response has been to pour hundreds of millions of dollars into super political action committees targeting politicians who question AI. The industry has also downplayed the backlash as a product of paranoia peddled by so-called AI doomers, who worry the technology could destroy humanity, and NIMBYs, or not-in-my-backyard activists.

But those labels, common in Silicon Valley, are foreign to many of the people pushing back against AI. “I’ve been called a lot of things over the years working on issues, but doomers is a new one,” said Sandy Bahr, director of the Sierra Club’s Grand Canyon Chapter.

After hearing about a marriage damaged by an AI companion, Grayston, the pastor in Austin, hosted an hourlong discussion at LifeFamily Church with the leader of a local nonprofit dedicated to AI education, the Alliance for Secure AI Action, which receives donations from some individuals with ties to effective altruism.

Persuaded that there was a dark side to the technology, Grayston, 42, has since spoken about its dangers at other churches, written an opinion piece for a religious news site run by the conservative outlet RealClearPolitics and helped draft educational materials about AI for other faith leaders.

“I’m not advocating for the abolition of AI,” Grayston said. “I want common-sense regulation.”

The Snyders of Wolcott didn’t know what a data center was, they said, until they discovered that one had been approved in their backyard. After learning that the proposed facility would use more than 4 million gallons of water a day, Bart Snyder, 59, worried that it would turn the backyard pond where he fishes for largemouth bass into a crater. He sued to halt the project and funded a campaign to unseat the three officials who supported it.

The Snyders, who are self-proclaimed “hard-core Republicans” and support Trump, said the best thing that had come from the process was the relationships they had built with people of different political stripes.

“Now, I don’t care what affiliation you are,” Bart Snyder said. “If you’re against data centers, we’ll join forces.”

In the Boise music venues where Gardner’s rock band, Animus Gem, plays, the 30-year-old bassist is among many artists, musicians and writers troubled by how the technology, which was trained using copyrighted material, can instantaneously create songs, images and books.

He and his wife, Cathryn, started a local affiliate of PauseAI, a U.S. nonprofit that seeks to halt AI development and receives some funding from effective altruists. When the organization expanded last year, Boise became one of 30 cities with active groups, up from five in 2025. The couple now have 10 volunteers and 500 signatures on a petition to slow AI.

“It’s really felt like exponential growth,” Cathryn Gardner said. “The artistic community in Boise has been really passionate about it.”

This article originally appeared in The New York Times.

Copyright 2026 The New York Times Company
