Pull the Plug: Inside the Anti-AI Protest Movement That Is Growing Faster Than Anyone Expected
On Saturday, February 28, 2026, a couple of hundred people walked through the streets of King’s Cross in London, chanting slogans and carrying handmade signs. The choice of neighbourhood was no accident. King’s Cross is home to the UK headquarters of OpenAI, Meta, and Google DeepMind, three of the most powerful artificial intelligence organisations on the planet, and the protesters were there to make a point directly to the institutions whose work they believe poses the greatest risk to human society in the history of civilisation. The march was organised by two activist groups, Pause AI and Pull the Plug, and was billed as one of the largest protests of its kind ever held.
The event was covered by MIT Technology Review, whose reporter walked the route with protesters and spoke to organisers and participants about their motivations, their fears, and their sense of whether any of it would actually make a difference. What he found was a movement that is more organised, more strategically minded, and faster-growing than its relatively modest crowd size might suggest.
The first thing to understand about this protest is what it was not. It was not a single-issue demonstration. The concerns on display ranged dramatically in scale and nature, from the immediate and concrete to the long-term and existential. One chemistry researcher ticked off grievances that included the spread of AI-generated content online making it harder to find reliable academic sources, a problem that anyone who uses the internet for research will recognise. Another protester, an older man wearing a sandwich board, focused on the economic consequences of automation and the risk of mass unemployment. Others carried signs about abusive AI-generated imagery, autonomous weapons systems, and the possibility of human extinction driven by AI systems that exceed human control. One sign simply read “Stop using AI.” Another read “EXTINCTION=BAD.”
This breadth of concern is both a strength and a challenge for the movement. It means the tent is wide. Almost anyone who has reservations about the pace or direction of AI development can find something to identify with at a Pause AI march. But it also means the movement does not speak with a single unified voice, which makes it harder to press governments or companies toward specific policy changes. Most of the protesters MIT Technology Review spoke to said their primary goal was awareness rather than direct corporate pressure, and for good reason.
Maxime Fournes, the global head of Pause AI, is a former AI industry insider who spent twelve years working in the field before concluding that the technology’s trajectory was dangerous enough to leave his career and dedicate himself to activism full-time. His assessment of whether protests can directly change the behaviour of technology companies is blunt. He does not think they can. The companies, he said, are optimised simply not to care. Instead, Fournes is focused on a longer-term strategic approach that targets the inputs rather than the outputs of the AI industry. His goal is to make working in AI a less attractive career choice, to support whistleblowers who want to speak out about what is happening inside AI labs, and to build enough public awareness that governments feel genuine political pressure to introduce meaningful regulation. Dry up the talent pipeline, change the public narrative, and make the cost of ignoring safety concerns high enough that companies can no longer afford to do so.
Joseph Miller, who heads the UK branch of Pause AI and co-organised the march, brings a different but complementary perspective. Miller is a PhD student at Oxford University studying mechanistic interpretability, one of the most technically demanding and important fields in contemporary AI safety research. Mechanistic interpretability is the attempt to understand, at a granular level, exactly what is happening inside a large language model when it processes a prompt and generates a response. It is painstaking, complex work, and Miller’s immersion in it has led him to a troubling conclusion. He believes the technology may be permanently beyond meaningful human understanding and control, and that this fundamental opacity carries catastrophic potential.
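To give a concrete, if heavily simplified, sense of what that work involves, the sketch below uses PyTorch to instrument a tiny, randomly initialised transformer with forward hooks so that every layer’s internal activations can be recorded and examined. It is a hypothetical illustration of the basic move in interpretability research, not code from Miller’s own work; real studies apply far more sophisticated analyses to trained frontier models.

```python
# A toy sketch of the basic move in mechanistic interpretability:
# instrumenting a model so its internal activations can be captured
# and inspected rather than treated as a black box. The model here
# is tiny and randomly initialised, purely for illustration.
import torch
import torch.nn as nn

# A minimal four-layer transformer encoder with a small hidden size.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=4,
)
model.eval()

# Forward hooks record what each layer computes as input flows through.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for i, layer in enumerate(model.layers):
    layer.register_forward_hook(make_hook(f"layer_{i}"))

# Run a dummy "prompt" (batch of 1, sequence of 8 token embeddings)
# through the model, then inspect what every layer produced.
x = torch.randn(1, 8, 64)
with torch.no_grad():
    model(x)

for name, act in activations.items():
    print(f"{name}: shape={tuple(act.shape)}, "
          f"mean activation norm={act.norm(dim=-1).mean().item():.3f}")
```

Even this crude view hints at why the field is so hard: the output is a wall of high-dimensional numbers, and turning it into a human-legible account of why a model said what it said is precisely the problem Miller suspects may never be fully solved.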
Crucially, Miller does not rely on dramatic science-fiction scenarios to make his case. He does not need to invoke a rogue superintelligence or a machine that consciously decides to turn against humanity. He points instead to something more plausible and more immediately concerning. If AI systems are integrated into the command-and-control infrastructure for nuclear weapons, the bar for disaster is not superintelligence. It is a sufficiently capable system making a sufficiently bad decision in a sufficiently critical moment, with no human able to intervene in time. The less rational the decisions that humans make about AI deployment, he argues, the less powerful the AI needs to be before something goes catastrophically wrong.
The timing of the march gave Miller’s concerns an uncomfortable currency. In the same week that protesters gathered in King’s Cross, the US government was attempting to force Anthropic, the AI safety company behind the Claude family of large language models, to remove all restrictions on military use of its technology. The standoff ended with Anthropic refusing and the Pentagon signing a new classified deal with OpenAI instead. For many at the march, that sequence of events was not an abstract geopolitical story. It was a live demonstration of exactly the kind of unchecked AI deployment they had come to protest against.
For Matilda da Rui, another co-organiser of the march and a member of Pause AI, the stakes of the AI question are binary in the most literal sense. She believes the technology will either allow humanity to solve every problem it faces, permanently and comprehensively, or it will end human civilisation entirely. There is no moderate outcome in her view. She told the MIT Technology Review reporter that it is a mystery to her why anyone who genuinely understands the problem would focus on anything else. That level of conviction is unusual in mainstream political discourse. It is not unusual in the AI safety community, where a significant number of serious researchers hold comparably stark views about the long-term risk profile of the technology.
The atmosphere of the march, however, did not match the apocalyptic framing of some of its participants. The MIT Technology Review reporter described the mood as pleasant and even fun, with little sense of the existential urgency that motivated many of those taking part. This is partly a function of the diversity of the crowd. When the concerns range from the very immediate to the extremely long-term, it is difficult to sustain a single emotional register across a large group of people. A person who is primarily concerned about AI-generated academic slop making their research harder is in a different emotional state from someone who believes the species faces extinction.
There was also a practical accessibility to the event that may have broadened participation beyond the committed activist core. Organisers had pitched the march as a social event, welcoming anyone who was curious about the issues to come along and see for themselves. This drew in some participants who were there primarily for the experience rather than the cause. One man who worked in finance told the reporter he had come along with his roommate on a Saturday when he did not have much else to do. He said he found it hard to see how anyone could totally oppose what the protesters were marching for, describing it as a fundamentally reasonable set of concerns rather than a politically divisive position. Unlike a protest on an issue where people hold strong opposing views, he suggested, anti-AI sentiment sits in a space where most people can find at least partial common ground.
That framing is interesting and worth examining. It is true that opposition to AI harms, in the abstract, is easy to agree with. Most people would say they are against abusive AI-generated images, against autonomous weapons making lethal decisions without human oversight, and against the spread of low-quality machine-generated content degrading the information environment. The harder questions involve trade-offs. What regulation is proportionate? Who enforces it and how? What legitimate benefits of AI development are you willing to slow or sacrifice in exchange for reduced risk? These are genuinely difficult questions that the protest movement has not yet fully answered, and they are the questions that governments and policymakers will need to grapple with as the calls for regulation grow louder.
The march ended in a church hall in Bloomsbury, where attendees gathered to network, share ideas, and discuss next steps. Name stickers were distributed. People introduced themselves to strangers. The atmosphere was more community meeting than revolutionary congress. But the movement that produced the gathering is more serious and more strategically developed than a casual observer might assume. Pause AI is an international organisation with chapters in multiple countries. It is growing rapidly. Its leadership includes people with deep technical knowledge of the systems they are protesting against. And it is operating in a political moment where public concern about AI is higher than it has ever been.
Whether street protests can translate into the regulatory change that participants say they want remains an open question. The technology companies whose London offices line the streets of King’s Cross have so far shown little sign of slowing down. But the people who walked past those offices on February 28, 2026, are no longer a handful of activists with hand-painted signs. They are the visible edge of a much larger conversation that is happening across universities, research labs, newsrooms, and parliaments around the world. The march may not have changed anything on the day. But the movement behind it is one that the AI industry, and the governments that are still working out how to respond to it, will need to take increasingly seriously.
