The February Weird
#77: Ind-AI, regulators mount up, (Department of) War of Words, "you" is more than just you, the trailer trail, Internet-e-Tabaqati (Iran's "class-based internet")
Welcome back to your Weird, our walk through the undulating undercarriage of the dystopian machine, where we peel the oppressive onion and try not to cry.
Let’s get stuck in!
Ind-AI
India’s AI Impact Summit in New Delhi was marred by gridlocked traffic, “a ‘VIP culture’ at the summit that left people feeling like third-class citizens”, an ironic ban on laptops, and, perfectly in keeping with the relentless overselling of all things AI, “an argument over a fraudulent robot-dog”.
Staff from India’s Galgotias University had apparently been presenting a commercially available, Chinese-made robot dog as their own creation at their booth. Government sources confirmed to Fortune that they had asked the university to leave the premises following the revelation.
Controversy struck as Bill Gates bowed out “owing to renewed scrutiny of his ties to the late financier and convicted sex trafficker Jeffrey Epstein”, and “OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei [stole] the spotlight from Modi by refusing to hold hands for a photo op designed to be a show of unity and triumph”.
It’s good to know such mature men are in charge of the world’s most disruptive technology.
Regulators, Mount Up
As per ETC Journal, February 2026 is “a critical month in the battle over who controls AI regulation in America”.
California, Texas, Colorado, New York, and Illinois have all enacted significant AI regulations scheduled to take effect in early 2026, covering everything from frontier model safety to employment discrimination to companion chatbots. But on December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” that essentially declared war on state-level AI laws. The order directs the Attorney General to establish an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws inconsistent” with federal policy, and it orders the Commerce Department to identify “burdensome state AI laws” within ninety days—a deadline that falls in early March 2026.
Will states’ rights win the day, or can the executive branch seize control of the administrative agenda for a technology already embedded in, and reshaping, every facet of digital life?
If the first sixty days of 2026 see the federal government and major states escalating rhetoric and filing dueling enforcement actions, trust between levels of government collapses and the possibility of negotiated compromise fades. But if cooler heads prevail—perhaps with industry groups brokering conversations, or with states and feds quietly working out modus vivendi on enforcement priorities—then there’s still a path to sensible coordination. The clock is ticking, because AI capabilities are advancing faster than any litigation timeline, and the companies building frontier systems need to know the rules of the road now, not in three years when appeals courts finally weigh in.
Well, the recently rebranded Department of War has decided it likes fighting so much that it will pick a battle with its own supplier…
(Department of) War of Words
"Legality is the Pentagon's responsibility as the end user"
As per the Artificial Intelligence Strategy for the Department of War, published in January 2026, the official policy of the US government with regards to the military deployment and integration of AI is that “Military AI is going to be a race for the foreseeable future, and therefore speed wins”, so “the risks of not moving fast enough outweigh the risks of imperfect alignment”.
In plain English, the Zuckerbergian exhortation to “move fast and break things” has become Pentagon technology policy.
The AI-maximalist stance taken by Secretary of War Pete Hegseth (pictured below having smelt it but not admitting that he dealt it) has put the Pentagon in conflict with one of their major suppliers, the AI firm Anthropic.
Specifically, the problem arises from Hegseth’s policy on AI guardrails as outlined on page 5 of the above strategy document (emphasis mine):
The Department must also utilize models free from usage policy constraints that may limit lawful military applications.
[…]
I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days.
NPR quoted a source who dropped a rather spicy interpretation:
"Legality is the Pentagon's responsibility as the end user," a senior Pentagon official who declined to give their name told NPR this week.
It’s worth noting the wording: Legality is the Pentagon’s responsibility, but not necessarily its aim or intention.
The argument between Hegseth’s Department of War and the nerds in the nerdery heated up when Anthropic CEO Dario Amodei released a public statement on the contract dispute.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” it begins.
We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei continues. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” Some use cases, he adds, “are also simply outside the bounds of what today’s technology can safely and reliably do.”
“Mass domestic surveillance,” Amodei writes, “is incompatible with democratic values” and “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties”. As for usage beyond the capability of today’s AI, Amodei is adamant that “frontier AI systems are simply not reliable enough to power fully autonomous weapons”.
Amodei then sets out the crux of the conflict between the Pentagon and Anthropic:
The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
“Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place,” the statement concludes.
According to Pentagon spokesman Sean Parnell, Anthropic have been given until 5:01pm on Friday, 27 February to decide. “Otherwise,” he posted on X, “we will terminate our partnership with Anthropic and deem them a supply chain risk”.
As for the allegations made by Amodei that the contract beef is a matter of principle and conscience, Parnell minced no words:
The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.…
If the narrative is fake, why is Anthropic risking $200 million in defense contracts? Or is the Pentagon crying “woke” when a tech company draws the line at mass domestic surveillance and fully autonomous weapons?
Call me an uptight grammarian, but I find it hard to trust someone who uses four full stops to form an ellipsis. You decide for yourself.
Story ongoing.
“You” Is More Than Just You
The eyes of the state can be on you at all times.
Professor William Webster, the Biometrics and Surveillance Camera Commissioner, has published a response to the UK Home Office’s consultation on “a new legal framework for law enforcement use of biometrics, facial recognition and similar technologies”.
As described in The January Weird (#76), Home Secretary Shabana Mahmood recently unveiled a bold plan to expand and centralise policing in Britain, a strategy with “the largest-ever roll-out of live facial recognition technologies, across England and Wales” at its heart.
That segment (Britannia Waives The Rules) also pointed out consistent similarities between policy suggestions and reports put out by the Tony Blair Institute (TBI) and proposals made by the current Labour government.
There’s more.
As per The Telegraph and confirmed by TBI’s social media, Shabana Mahmood was a guest at the TBI’s Christmas party on 10 December 2025, and she was interviewed by the man himself.
In that interview, as quoted in The Telegraph, Mahmood said the following to Blair:
“When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.”
I had to read that quote a few times and then search the web to make sure it hadn’t been fabricated, because even a weary cynic such as your humble correspondent couldn’t believe that a sitting politician had said that on the record, in public, and not been pilloried in the press and run out of town on a rail.
I don’t know about you, but the current location of the Overton Window is certainly outside of my comfort zone.
So the Home Secretary wants “the eyes of the state” to be on you at all times.
But since we’re dealing with lawmakers, what definition of “you” are they using?
In Section 4 of Professor Webster’s response to the Home Office consultation, he gives a chilling rundown on what information the government wants to gather on its citizens:
The proposition in the consultation is that the framework will capture a range of biometric and inferential technologies used in law enforcement. Technologies under consideration are: [1] biometric technologies (e.g. fingerprints, DNA, facial images, voice, irises, and gait); [2] inferential technologies (e.g. emotions, behavioural and actions); and [3] object recognition (e.g. bags, vehicles (ANPR), clothing, hats and weapons). It is important to note that these technologies go beyond personal identification and include other purposes like tracking and predicting behaviour. The technology types being considered also go beyond biometrics and inferential technologies to include a range of digital data process and databases, including the use of Artificial Intelligence (AI).
As described above, the avowed intent of this plan, this policy, this direction for law enforcement, is the expansion of “you”. No longer merely your actions after the fact and your physical attributes, recorded for posterity if you fall foul of the law, but your genetics, your biometrics, your facial telemetry, the way you walk, the sound of your voice, your feelings, and your belongings, all recorded, linked, and tracked. The aim: to permit algorithmic prediction and analysis of everything that makes up “you”, so that the state can determine, according to whatever opaque standards and metrics it decides to use at any particular time, whether you are likely to commit an offence in the future, or likely to have ever committed one. Because while we watch the Minority Report comparisons come rolling in, it’s easy to forget that once all that data is stored somewhere, it is accessible forever. What isn’t illegal now may become so in the future.
Or, as phrased by Webster in his response (emphasis mine):
The framework must also take into account that these technologies can be used for intelligence gathering, investigations and for evidential purposes, that they may be used in real-time or retrospectively, and that they can also be used for covert surveillance.
Bentham himself described the Panopticon as “a new mode of obtaining power of mind over mind, in a quantity hitherto without example”.
No great shock that any politician, least of all one currently in charge of Britain’s police and border forces, would love such a proposal.
What about the rest of us?
The Trailer Trail
“If you’re not doing anything illegal, why worry about it?”
CalMatters and KPBS report that a motorist who stopped to take a closer look at what he thought was an abandoned trailer on a two-lane road in southern California “found a hidden camera feeding a vast surveillance network that logs the license plate of every driver passing through this stretch of remote backcountry between San Diego and the Arizona state line.”
James Cordero “found dozens of these cameras hidden in trailers and construction barrels on border roads around San Diego and Imperial counties: one on Old Highway 80 near Jacumba Hot Springs; another outside the Golden Acorn Casino in Campo; another along Interstate 8 toward In-Ko-Pah Gorge.”
They started showing up after California granted permits to the Border Patrol and other federal agencies to place license plate readers on state highways in the last months of the Biden administration. Now as many as 40 are feeding information into Trump administration databases as the Democratic-led state chafes over the federal government’s massive deportation program.
The cameras are raising concerns with privacy experts, civil liberties advocates and humanitarian aid workers who say California should not be supporting the surveillance and data-collection program, which they view as an unwarranted government intrusion into the lives of Americans who’ve committed no crime. Moreover, they say the program conflicts with state law.
The Electronic Frontier Foundation (EFF) has gone as far as mapping 40 hidden cameras placed by federal agencies along California’s roads, mostly near the southern border.
In November 2025, the Associated Press (AP) reported on US Border Patrol “monitoring millions of American drivers nationwide in a secretive program to identify and detain people whose travel patterns it deems suspicious”.
From the AP investigation:
The predictive intelligence program has resulted in people being stopped, searched and in some cases arrested. A network of cameras scans and records vehicle license plate information, and an algorithm flags vehicles deemed suspicious based on where they came from, where they were going and which route they took. Federal agents in turn may then flag local law enforcement.
Suddenly, drivers find themselves pulled over — often for reasons cited such as speeding, failure to signal, the wrong window tint or even a dangling air freshener blocking the view. They are then aggressively questioned and searched, with no inkling that the roads they drove put them on law enforcement’s radar.
Once limited to policing the nation’s boundaries, the Border Patrol has built a surveillance system stretching into the country’s interior that can monitor ordinary Americans’ daily actions and connections for anomalies instead of simply targeting wanted suspects.
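The mechanism the AP describes, an algorithm that flags vehicles based on where they came from, where they were going, and which route they took, can be sketched as a simple rule-based scorer over plate reads. Everything here is invented for illustration: the camera names, the rules, and the thresholds are hypothetical stand-ins, not the actual Border Patrol system.

```python
# Hypothetical sketch of rule-based travel-pattern flagging over license
# plate reads. Camera IDs, rules, and thresholds are all invented.
BORDER_CAMERAS = {"old_hwy_80", "i8_in_ko_pah", "campo"}

def flag_vehicle(reads, threshold=2):
    """reads: list of (camera_id, unix_timestamp) tuples for one plate."""
    score = 0
    cameras = [c for c, _ in reads]
    # Rule 1: repeated passes by border-area cameras
    if sum(c in BORDER_CAMERAS for c in cameras) >= 3:
        score += 1
    # Rule 2: route touches multiple distinct border-area corridors
    if len(set(cameras) & BORDER_CAMERAS) >= 2:
        score += 1
    # Rule 3: short turnaround between consecutive reads (< 30 minutes)
    times = sorted(t for _, t in reads)
    if any(b - a < 1800 for a, b in zip(times, times[1:])):
        score += 1
    return score >= threshold

reads = [("old_hwy_80", 0), ("campo", 1200), ("old_hwy_80", 90000),
         ("campo", 91500), ("i8_in_ko_pah", 180000)]
print(flag_vehicle(reads))  # True
```

The unsettling part, which this toy version makes plain, is that every rule fires on perfectly lawful behaviour: driving the same roads often, or making a quick round trip, is enough to put you on the radar.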
Back in southern California, a local grandmother ended up being questioned because of the frequency with which she visited casinos.
Sergio Ojeda, a community organizer with the mutual aid group Imperial Valley Equity and Justice said U.S. Customs and Border Protection apparently believed his grandmother’s driving patterns were suspicious because they interrogated her about the amount of time she spends at local casinos in the area.
“She was outraged about it,” said Ojeda. His grandmother, a resident of Imperial Valley with legal status, was crossing the border when agents asked her about her trips to casinos.
“She asked them back, ‘Is something wrong with that? Am I not supposed to be doing that or why are you questioning me about this?’ and they were like “Oh, no, it just seems suspicious,” Ojeda recounted.
Ojeda said he was equally concerned, and he doesn’t enjoy the feeling of being watched just because he lives near the border. “It’s how I feel every day,” he said. “Driving around, I joke with my co-workers: ‘Which chapter of 1984 is this?’”
Well done, America. Keep up the good work.
Internet-e-Tabaqati
“In the context of Iran, the internet is the only independent witness to history.”
Internet-e-Tabaqati is the phrase used to describe the cyberspace policy of the Islamic Republic of Iran, and in English it means “class-based internet”.
Writing in Foreign Policy, security technologist Bruce Schneier describes Iran’s internet shutdown in response to the protests that broke out in December 2025:
Mobile networks, text messaging services, and landlines were disabled—even Starlink was blocked. And when a few domestic services became available, the state surgically removed social features, such as comment sections on news sites and chat boxes in online marketplaces. The objective seems clear. The Iranian government aimed to atomize the population, preventing not just the flow of information out of the country but the coordination of any activity within it.
The problem isn’t just the shutdown itself, but the suite of policies and the overarching authoritarian attitude that preceded and enabled it.
The ability to disconnect a population is a feature of modern authoritarian network design. When a government treats connectivity as a faucet it can turn off at will, it asserts that the right to speak, assemble, and access information is revocable. The human right to the internet is not just about bandwidth; it is about the right to exist within the modern public square. Iran’s actions deny its citizens this existence, reducing them to subjects who can be silenced—and authoritarian governments elsewhere are taking note.
The mechanisms of control put in place by the Iranian regime, which contributed to the unrest, were used to try to stamp it out (emphasis mine):
In July 2025, the council passed a regulation formally institutionalizing a two-tiered hierarchy. Under this system, access to the global internet is no longer a default for citizens, but instead a privilege granted based on loyalty and professional necessity. The implementation includes such things as “white SIM cards”: special mobile lines issued to government officials, security forces, and approved journalists that bypass the state’s filtering apparatus entirely.
While ordinary Iranians are forced to navigate a maze of unstable VPNs and blocked ports, holders of white SIMs enjoy unrestricted access to Instagram, Telegram, and WhatsApp. This tiered access is further enforced through whitelisting at the data center level, creating a digital apartheid where connectivity is a reward for compliance. The regime’s goal is to make the cost of a general shutdown manageable by ensuring that the state and its loyalists remain connected while plunging the public into darkness. (In the latest shutdown, for instance, white SIM holders regained connectivity earlier than the general population.)
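The tiered-access model described above can be sketched as a simple decision function: white SIMs bypass filtering entirely, while ordinary lines get whitelist-only access that can be cut to zero during a shutdown. The tier names, domains, and function are invented for illustration; the real enforcement happens at the network and data-center level, not in application code.

```python
# Hypothetical sketch of Iran's "class-based internet" access model.
# All domain lists and tier names are invented for illustration.
DOMESTIC_WHITELIST = {"state-news.ir", "approved-market.ir"}

def allow_request(sim_tier: str, domain: str, shutdown: bool = False) -> bool:
    if sim_tier == "white":
        # Loyalist lines bypass the filtering apparatus, even during shutdowns
        return True
    if shutdown:
        # General population is cut off entirely
        return False
    # Ordinary lines: whitelist-only access; everything else is filtered
    return domain in DOMESTIC_WHITELIST

print(allow_request("white", "instagram.com"))           # True
print(allow_request("ordinary", "instagram.com"))        # False
print(allow_request("ordinary", "state-news.ir"))        # True
print(allow_request("ordinary", "state-news.ir", True))  # False
```

Note the asymmetry: the cost of a general shutdown is borne entirely by the `ordinary` tier, which is exactly what makes the shutdown politically affordable for the state.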
The technical architecture of Iran’s shutdown reveals its primary purpose: social control through isolation.
The brittle nature of the regime’s control is shown by the blanket approach to censorship:
By disabling local communication channels, the state prevents the “swarm” dynamics of modern unrest, where small protests coalesce into large movements through real-time coordination. In this way, the shutdown breaks the psychological momentum of the protests. The blocking of chat functions in nonpolitical apps (like ridesharing or shopping platforms) illustrates the regime’s paranoia: Any channel that allows two people to exchange text is seen as a threat.
Schneier pinpoints the danger of Iran’s approach: unlike in China, where the state replaced the internet with its own controllable alternatives like WeChat, “Iran’s overlay model is highly exportable.”
It demonstrates to other authoritarian regimes that they can still achieve high levels of control by retrofitting their existing networks. We are already seeing signs of “authoritarian learning,” where techniques tested in Tehran are being studied by regimes in unstable democracies and dictatorships alike. The most recent shutdown in Afghanistan, for example, was more sophisticated than previous ones. If Iran succeeds in normalizing tiered access to the internet, we can expect to see similar white SIM policies and tiered access models proliferate globally.
Indeed. As reported by the Internet Society, “Gabon’s High Authority for Communication (HAC) has ordered the indefinite suspension of major social media networks to ‘protect national unity and prevent social conflict’ amid local unrest.” Facebook, Facebook Messenger, Instagram, Telegram, TikTok and YouTube are all banned until further notice “to prevent the ‘spread of false information’ amid a wave of cost-of-living protests”.
Whether it is protest in Iran, the “spread of false information” in Gabon, “disinformation” in the EU, or “hate” in the UK, efforts are converging internationally to limit and control internet access and the peer-to-peer flow of information.
The decentralised structure of the internet is being undone by the centralising impulse and institutional paranoia of governments high on their own certainty that they know what’s best, and that citizens meeting and sharing information and ideas without their approval or oversight is inherently dangerous.
Tick tock, folks.
That’s it for this month’s Weird, everyone. I hope you enjoyed it.
Outro music is Communication Breakdown by Led Zeppelin, dedicated to everyone who can’t listen to it right now because their government has kicked them off the internet. May a brighter day come when we can all rock out together.
Stay sane, friends.
Communication breakdown
It's always the same
I'm having a nervous breakdown
Drive me insane