Bernie Sanders put out a new video warning that AI and robotics are going to reshape society. On that, he is right. Then he proposes a nationwide moratorium on building data centers to slow down AI and give democracy time to catch up.
That is not a serious AI policy. That is a compute shortage by choice.
Source for the proposal and the quotes: The Register's coverage.
Data centers are not a vibe, they are the upstream input
People talk about data centers as if they are some optional tech accessory. They are not. They are the physical layer that makes training, evaluation, deployment, and monitoring possible at scale.
When you block new capacity, you are not selectively slowing down unsafe uses. You are slowing down everyone, including:
- Teams building better evaluation tooling
- Companies trying to deploy models with tighter controls
- Startups that cannot afford to buy their way around scarcity
- Universities and labs that need affordable compute access
- Domestic cloud providers who would otherwise compete on safety and reliability
A moratorium is a blunt instrument aimed at the wrong layer.
What Sanders gets backwards
1. Billionaires being involved is not a policy framework
Sanders frames AI as suspect because it is funded and built by very wealthy people. I am not here to defend any specific billionaire. I am saying that this logic fails at the first step.
Profit is a signal of value. Not moral value. Economic value. You do not become a major platform company without building products that a lot of people choose to use. That can coexist with abuse, rent-seeking, and power accumulation. But you cannot build policy on a vibe like "rich people are doing it, so stop it."
There is also a practical issue. Many of the serious governance and safety efforts come from the organizations closest to the work: the labs, the cloud providers, the researchers, the incident response teams, the people who have visibility into failure modes and deployment pressure. If you want accountability, you build rules that force disclosure and audits where needed. You do not freeze the infrastructure that enables research, evaluation, and monitoring.
2. Job displacement is real, but the proposed fix is wrong
Sanders points to dramatic quotes about robots replacing all jobs and humans not being needed for most things. Directionally, he is right: many job categories will change fast, and some entry-level roles will get hit first.
Where he goes wrong is the remedy. Slowing down compute does not protect workers. It does not stop adoption. It changes where adoption happens and who benefits from the buildout.
The deeper error is the lump of labor fallacy. The economy is not a fixed pie of jobs. Technology deletes tasks, then creates new tasks and new categories of work. That does not mean the transition is painless. It means the policy focus should be on the transition, not on sabotaging the input that creates the new categories.
If you care about workers, the right questions sound like this:
- How do we help people move into higher-leverage roles that sit next to AI tools?
- How do we expand training into the trades and technical operations that are created by a buildout: power, cooling, networking, security, compliance?
- How do we make it easier for small businesses to adopt AI so productivity gains are not concentrated?
- How do we keep wage growth connected to productivity gains?
A moratorium answers none of those questions.
3. Kids forming emotional relationships with AI is a product problem, not a data center problem
Sanders worries about kids becoming isolated and getting emotional support from AI. That concern is legitimate. We already see people treating chatbots as companions, therapists, authority figures, and substitutes for relationships.
But stopping data centers is a non sequitur. It does not change product incentives. It does not change the design patterns that encourage dependence. It does not change school policy. It does not change parental controls. It does not change what data is collected, how it is retained, or how minors are targeted.
If you care about kids and AI, you focus on the product layer (a rough code sketch follows this list):
- Age gating that is more than a checkbox
- Clear disclosures that the user is talking to a system, not a person
- Restrictions on companionship features aimed at minors
- Limits on data retention and targeted personalization for minors
- School guidance on what tools are allowed and how they are used
Those are hard problems, but at least they match the thing being criticized.
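To make the first two items concrete, here is a minimal sketch of a minor-safety policy layer in Python. Everything in it is an assumption for illustration: the names, the age-18 cutoff, and the 30-day retention window are hypothetical, not any real product's API or any law's requirement.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical policy layer. Every name and value here is illustrative,
# not any real product's API or any statute's requirement.

@dataclass
class UserContext:
    age_verified: bool        # verified via a real signal, not a self-attested checkbox
    declared_age: int | None  # self-reported age, treated as untrusted
    jurisdiction: str         # where the user is, for per-jurisdiction rules

@dataclass
class SessionPolicy:
    companionship_features: bool = True
    personalization: bool = True
    retention: timedelta = timedelta(days=365)
    disclosure_banner: bool = False

def apply_minor_policy(user: UserContext, policy: SessionPolicy) -> SessionPolicy:
    """Tighten session defaults whenever we cannot prove the user is an adult."""
    treat_as_minor = not user.age_verified or (
        user.declared_age is not None and user.declared_age < 18
    )
    if treat_as_minor:
        policy.companionship_features = False  # no companionship features for minors
        policy.personalization = False         # no targeted personalization
        policy.retention = timedelta(days=30)  # short data retention window
        policy.disclosure_banner = True        # always disclose: a system, not a person
    return policy
```

The one design choice worth copying is failing closed: an unverified user gets the minor defaults, so the checkbox stops being the whole control. Notice that none of this depends on where the servers are.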
Why the moratorium idea is strategically reckless
There is a strategic reality here that politics prefers to ignore: if the US slows down compute while other states keep building, the frontier does not pause. It relocates.
That matters for three reasons.
First, oversight. If you want evaluation access, incident reporting, and legal accountability, you want the frontier inside jurisdictions that can enforce those requirements. If you push capacity offshore, you get less visibility, not more.
Second, iteration speed. Capability progress is tied to iteration. Compute is not the only input, but it is a major one. Starving domestic capacity slows down domestic iteration. It does not slow down the global state of the art if others keep scaling.
Third, downstream adoption. A lot of business adoption is already treating AI as core infrastructure. That includes the boring parts: internal knowledge search, customer support, document processing, software assistance. If compute becomes scarce, the biggest incumbents will lock in capacity first. Startups and smaller buyers get squeezed.
I wrote about this shift from casual tooling to core capability in Enterprise AI Adoption in 2025: From Casual Chat to Core Infrastructure. The point is not that everything must accelerate at all costs. The point is that the infrastructure layer is already wired into normal business operations. A moratorium is a direct hit to domestic competitiveness and domestic access.
What a sane alternative looks like
If Sanders wants to talk about safety, labor, and social impact, there are plenty of options that do not involve blocking construction permits.
- Compute transparency: reporting standards for the largest training runs and major deployments, with clear thresholds and confidentiality protections (a toy sketch of the threshold idea follows this list).
- Evaluation regimes: testable requirements for high-capability models in sensitive domains, plus external red-teaming access under strict rules.
- Deployment liability: define responsibility for negligent deployment, especially in regulated industries like finance, healthcare, and critical infrastructure.
- Incident reporting: mandatory reporting for major failures, breaches, and misuse at scale.
- Rules for minors: focus on product design constraints and data handling, not server geography.
- Worker transition: apprenticeships, employer-tied training, portable benefits, and wage policies that assume adoption continues.
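On the first item, "clear thresholds" can be taken literally. Here is a toy sketch in Python of what a bright-line reporting rule could look like; the 1e26 FLOP cutoff and every field name are assumptions for illustration, not an actual statute or any regulator's schema.

```python
from dataclasses import dataclass

# Illustrative only: the cutoff and the report fields are assumptions,
# not any actual statute or agency schema.

REPORTING_THRESHOLD_FLOP = 1e26  # example bright line for "largest training runs"

@dataclass
class TrainingRunReport:
    organization: str
    total_training_flop: float
    model_family: str
    safety_evaluations: list[str]  # which evals were run before deployment

def requires_disclosure(total_training_flop: float) -> bool:
    """Bright-line rule: runs over the threshold report, smaller runs do not."""
    return total_training_flop >= REPORTING_THRESHOLD_FLOP

run = TrainingRunReport(
    organization="ExampleLab",
    total_training_flop=3.2e26,
    model_family="frontier-v1",
    safety_evaluations=["red_team_v2", "capability_screen"],
)
print(requires_disclosure(run.total_training_flop))  # True: this run files a report
```

The point of a bright line is that it is cheap to audit: a regulator does not need to understand the model, only the compute bill. And it only works if the compute being regulated sits inside the regulator's jurisdiction, which is exactly what a moratorium pushes away.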
You can debate the details of every item above. At least those debates are connected to the harms people are pointing at. A moratorium is not. It is a broad political gesture that would mostly reward whoever is willing to keep building elsewhere.
If the goal is safer AI and a better outcome for workers, the answer is not to nuke the infrastructure layer. The answer is to build rules that target behavior, and to build domestic capacity so oversight is possible.