Resistance to AI — A Proportionate Response

For the past couple of years, I’ve been noticing something in my own work that I want to name plainly. The deeper I go into the question of human purpose in the AI era, the more I find AI itself has become my co-creator in asking it. Not a tool I pick up. A thinking partner I work with. Sounding board, sparring partner, second mind. The questions I’m asking about purpose are being shaped, sharpened, sometimes outright reframed in conversation with AI. I am, by any honest accounting, becoming more deeply engaged with AI the further I go into the most human question I know.

This would have surprised me a few years ago. It doesn’t anymore. And it’s pushed me toward a thought I want to put down clearly.

The phrase proportionate response is in the air right now, and not in an abstract way. War is being waged. Political violence is no longer a fringe concern. Ordinary people, watching the news, are thinking — sometimes for the first time — about what it means for a response to be measured, calibrated, proportional to what provoked it. The phrase has weight again.

That weight is worth borrowing.

Because there is another conversation, running alongside the harder ones, where the same concept applies and is almost never invoked correctly. The cultural posture toward AI has hardened, in many quarters, into something defensive — and increasingly into something more than posture. An incendiary device thrown at the gate of an AI executive’s home.¹ State legislatures across the country racing to pause or ban the infrastructure AI requires.² Federal lawmakers proposing nationwide moratoria.³ This is no longer mood. It is action. And it is being taken in the name of a measured, proportionate concern about what AI is doing to society.

The concerns driving the resistance are not imaginary. Job displacement and the economic dislocation that follows it. Anxiety about what happens to human meaning when machines can do what once required us. Water consumption at hyperscale data centers. Rising energy costs absorbed by ordinary ratepayers. The strain on local power grids in communities that did not choose to host the infrastructure of an industry they barely understand. These are real things, and pretending otherwise would be its own form of dismissal.

The trouble is that the underlying logic of the response is not actually proportionate. It only looks that way.

Consider where our cultural intuition about proportionality comes from. It rests, knowingly or not, on Newton — for every action, an equal and opposite reaction. That's where the "equal and proportionate" quietly lives. AI displaces jobs, therefore push back against AI. The math feels right because the model feels right. Action, reaction, equilibrium restored.

But Newtonian cause-and-effect is not the right model for what is actually happening. Aristotle understood something that contemporary thinkers like John Vervaeke have been recovering: motion is shaped not only by causes but by constraints, the geometry of what is possible at all. A force without an account of constraints tells you almost nothing. Scarcity, in this older and more accurate frame, is not a feature of the material world. It is a constraint pattern. It shapes the field of human possibility by what it forecloses, not by what it actively does.

AI is not a new force pushing against humans. It is, structurally, the removal of constraints that have historically suppressed human flourishing for the vast majority of people. Medicine becoming radically more accessible. Hunger becoming structurally solvable. Education that meets each child where they actually are. Diagnostics in places that have never had doctors. Tools for human creation handed, at near-zero cost, to people who would never otherwise have had them. These are not promises. They are unfolding now, in real time.

You do not respond proportionately to a wall coming down by pushing back against the empty space.

This is where the anti-AI posture fails its own test. Proportionality, in any honest ethical sense, means weighing the full stakes on both sides of an action — not just the visible losses, but what is foregone by inaction. What is the opportunity cost? When the foregone side of the ledger contains civilizational possibilities of that magnitude, a defensive response is not proportionate. It is disproportionate in the opposite direction. The reflex to slow, restrict, retreat — weighed honestly against everything at stake — fails its own ethical premise.

The proportionate response to AI is to engage it. Carefully, responsibly, and fully.

This is not techno-optimism. Techno-optimism waves away the hard parts. The displacement is real. The disorientation is real. The civilizational adjustment is real and uncomfortable and will not be short. There is a reason eyes adjusting to brighter light experience the brightness as pain before they experience it as illumination. None of that is to be dismissed.

But the question is what to do with the discomfort. And here the proportionate-response frame, properly understood, has more to offer than the resistance admits. Because once you accept that the stakes on the other side are civilizational, the question changes shape. It is no longer should we engage? It becomes how do we engage well enough to deserve what AI is making possible?

That second question is the one I find myself living inside. It is why my own work has moved closer to AI rather than further from it. The answer to how do we engage well turns out to be inseparable from purpose. A person without a clear sense of what they are for, handed god-like creative tools, is not empowered. They are overwhelmed. The tools amplify whatever is already there. If purpose is there, AI amplifies purpose. If confusion is there, AI amplifies confusion. Scale always finds what is underneath it.

This is why I think the grassroots level is where this gets decided. Not in policy chambers. Not in corporate boardrooms. In the lived discovery of personal purpose, one person at a time, one community at a time. Institutions calibrated to scarcity will not distribute abundance for us. They are not built to. The work is human, local, and specific — and the AI capacity that makes the work possible is already in everyone’s hands.

The proportionate response to a moment of civilizational possibility is not retreat. It is the patient, deliberate work of becoming the kind of people who can carry it.

That is what I am trying to do in my own corner of it. The fact that I am doing that work with AI — humanity in the loop, not merely human-in-the-loop — is no longer a contradiction to me. It is the point.


Notes

¹ On April 11, 2026, a 20-year-old man threw an incendiary device at the gate of OpenAI CEO Sam Altman’s home in San Francisco, then went to OpenAI’s Mission Bay headquarters and attempted to force entry. He was arrested and booked on suspicion of attempted murder, criminal threats, and possession of a destructive device. The previous November, OpenAI employees had been told to shelter in place after a separate threat at the company’s offices. See Washington Post, “Attack on Sam Altman’s San Francisco home prompts fears of AI division” (April 14, 2026); Fortune, “Attacks on Sam Altman’s home are extreme. But the AI backlash is going mainstream” (April 16, 2026).

² At least eleven U.S. states have introduced legislation since late 2025 to restrict, pause, or ban new data center construction. Maine became the first state poised to enact a statewide moratorium, with its legislature passing a bill in April 2026 halting new projects requiring 20 megawatts or more of power until November 2027. Similar moratorium bills are active in Georgia, Maryland, Minnesota, New York, South Dakota, Vermont, Virginia, Wisconsin, and others. Between April and June 2025 alone, twenty proposed data center projects worth a combined ninety-eight billion dollars were blocked or delayed by local resistance. See Axios, “These states don’t want data centers in their backyards” (April 5, 2026); Built In, “States Push Data Center Moratoriums as AI Growth Surges” (April 2026); Good Jobs First, “Data Center Moratorium Bills Are Spreading in 2026.”

³ At the federal level, Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced the Artificial Intelligence Data Center Moratorium Act of 2026 (S. 4214) on March 25, 2026, calling for an immediate nationwide pause on new AI data center construction until comprehensive federal safeguards are enacted. See Office of Senator Bernie Sanders, press release of March 25, 2026.