The Weekly Weird #29
No badge for you, Russia's Safe City surveillance system, give up privacy for magic beans, the EU's AI lie detector, Network Rail gets emotional
Welcome back to your weekly wending along the beach of bleakness!
First off, apologies for this week’s Weird hitting your inbox a bit later than usual. A wise human once said “Never ruin an apology with an excuse”, so I won’t tell you why, but I apologise for keeping you waiting. I’m assuming you were waiting, maybe I think too much of myself…
The next episode of the podcast comes out this Sunday - I speak with Steve Endacott, the British entrepreneur behind AI Steve, the UK’s first AI parliamentary candidate. It was great fun chatting with him; he had some unsurprising criticism of the state of politics along with some interesting thoughts on AI, the future of elections, a tech-fuelled employment cataclysm…good times!
Anyway, you’ve been patient, let us not tarry.
No Badge For You
The British Army has pulled a reverse-Oprah: “You don’t get a badge, you don’t get a badge, nobody gets a badge!”
The Financial Times reported this week that new military cap badges, to be produced to mark the shift in monarch from Liz 2 to Chuck 3, have been deemed problematic by defence officials because the British company making them outsources production to China.
“There is a fear that tracking devices or a GPS transmitter could be embedded in the cap badges,” a senior UK defence official said.
A minor question of fashion, perhaps, but the FT opines that it speaks to a deeper question abroad in the land:
The issue encapsulates a broader confusion among western countries about whether to treat China, the world’s second-biggest economy, as a friendly trading partner or an implacable foe.
Can’t it be both? They’ll trade with us in friendly fashion until we object to an invasion of Taiwan, or the genocide of the Uyghurs, or the continued occupation of Tibet, or the enforcement of their draconian lack-of-free-speech rules on expatriates around the world. If we do object to what they do, it seems highly likely that they’ll be happy to switch to the role of foe, although “implacable” seems hyperbolic. They’d probably be very placable as long as we’re laissez-faire about violence, slavery, Communism, oppression, you know, everything that makes a multipolar world order go round.
The parsimonious British establishment has begun to loosen the purse strings to keep China out of the supply chain:
Tobias Ellwood, former head of the UK parliament’s defence committee, said the issue had also recently bedevilled the cross-party committee when it decided to mint a series of “honour coins” to give to visiting dignitaries.
Ellwood said committee members had a heated debate about whether to “buy British” and make the coins in the UK, or to make them more cheaply in China at a fifth of the price. Ellwood said the committee eventually decided to buy British out of security concerns.
Excluding Huawei from critical infrastructure is one thing, but teeny tiny cap badges? Is anything too small to be deemed a threat? Would China really want to put tracking or listening devices in badges?
In short, as another official told the FT, ‘managing relations with China [is] “going to get weirder before they get clearer.”’
Russia’s Safe City Surveillance System
Interfax reports that the Ministry of Finance plans to create a single video surveillance platform in the Russian Federation (original in Russian, translated by Google).
The project is being called Safe City, “a national platform for storing and processing information from all urban video surveillance systems of the Russian Federation.”
In mid-March, the head of the Ministry of Digital Development, Maksut Shadaev, said that one in three of the more than 1 million video surveillance cameras installed in Russia under the Safe City framework is connected to the facial recognition system, and that all the cameras positioned on roads automatically recognise licence plates. According to Shadaev, beyond security, video surveillance technologies are also used to monitor the condition of landscaping, the cleanliness of streets, and timely garbage collection.
Russia’s surveillance cameras will recognise faces, track license plates on vehicles, and send feeds to a central repository monitored for all manner of ‘antisocial’ behaviour. Remember that this is a country where calling a war a war can get you 25 years in prison, where an American journalist is about to go on trial (in closed session) for spying, where you go to prison for holding up a Pride flag, and where critics of the government end up dead. Oh, and where banks are secretly collecting your biometric data.
As Nakanune put it:
People in Russia are short-sighted and still do not understand how beautiful, convenient and progressive biometrics are. Therefore, whether they want it or not — the authorities will implement it wherever possible.
Rad.
That might contextualise why Putin has been bromancing with Xi Jinping and just starred in the worst episode of Carpool Karaoke ever with Kim Jong-Un. Or was it a new episode of Dictators In Cars Making Propaganda? It’s hard to keep track.
Give Up Privacy For Magic Beans
Worldcoin. Holy moly, how did we get this far into the life of the Weekly Weird without talking about a crypto project brought to you by OpenAI CEO Sam Altman?
As Roger Huang at Forbes put it:
Worldcoin presents a dystopian version of what it means to be human in an age of AI — giving people a small flow of tokens worth whatever the cryptocurrency market will bear in return for having their iris scanned.
Worldcoin kicks out all sorts of vibes, and none of them are good.
Their website has strong Bowfinger MindHead energy:
They use a creepy device called The Orb to scan your iris in exchange for their magic beans:
It is run by Sam Altman, known for his sanguine attitude towards developing artificial general intelligence, a technology that might destroy all life on Earth if it goes wrong and, if it goes right, make redundant the vast majority of human labour.
Fortune have a good explanation of how Worldcoin works:
It hopes to authenticate one’s “humanness” in an age of bots and deepfakes by asking users to stare into an orb, which then converts one’s biometric image into an impenetrable string of numbers. When combined with an algorithm, the code verifies an individual as a unique human, providing confirmation via a World ID, which is stored in one’s World App. As a reward for the scan, users can be airdropped Worldcoin tokens, which are currently priced around $5.
Privacy for magic beans, the hottest trade of the 21st century. Who would sign up for this?
In an April 2024 article, Fortune reported that “10 million people across 160 countries have enlisted their eyeballs, and the verified wallets have been used in 75 million transactions.”
As a humorous aside, “Worldcoin user transactions exist on OP Mainnet, formerly called Optimism, which is the main blockchain network in the Optimism ecosystem.”
Yeah, a global ID system with “no business plan” that pays people for their biometric data with out-of-thin-air tokens literally runs on Optimism. If you had to make it up, nobody would believe it.
Biometric Update reported this week that the Kenyan government “have dropped their investigation into the iris biometrics and digital identity company…after suspending the firm’s activities last year amid allegations that it was violating privacy laws and endangering state security.”
The ruling opens the door for Worldcoin to once again begin collecting iris biometric scans from Kenyans in exchange for WLD cryptocurrency tokens. Should it stick as a validation of the company’s activities, it could have a ripple effect on the host of other countries that are taking a microscope to Worldcoin’s biometrics business. Many have expressed concerns about communications and consent, and some have asked whether Worldcoin has collected biometrics from minors.
Worldcoin’s Orb has been banned in the EU, except for Germany and Portugal. There are “roughly 20,000 verified World ID holders” in the latter country.
The entire purpose of Worldcoin is to further the uptake of World ID, which is intended to serve as “proof of personhood” online, in part to counteract bots that make interactions, transactions, and basically any actions in cyberspace a colossal pain, and to serve as a confirmation tool for humans who need to claim Universal Basic Income (UBI) after AI takes their jobs and renders them unemployable. Of course, Sam Altman also runs OpenAI, the company pushing out ChatGPT, an LLM that powers many of those bots, as well as various other generative AI ‘solutions’ that are swamping the online world with ever more synthetic dreck. The man creating the job-destroying AI wants to get you into his crypto ecosystem to help you afterwards. A regular Man of the Year.
As the saying goes, perhaps the firefighter and the arsonist are natural allies.
Or, as Fortune phrased it, “others have noted how Altman, the cofounder and chairman of Tools for Humanity, is selling a solution to issues accelerated by another of his companies, OpenAI.”
Meanwhile, Worldcoin are running out of orbs. What a perfectly normal sentence.
The EU’s AI Lie Detector
In a horrible throwback to the dystopian novel The Truth Machine by James Halperin, the EU is planning to use what is being called a “controversial artificial intelligence lie detector” for border control.
The software analyzes facial movements and body gestures in order to flag suspicious behavior to immigration officers. The system could be incorporated at border checks at airports and ferry terminals as part of the EU’s upcoming border control schemes, the Entry-Exit System (EES) and European Travel Information and Authorisation System (ETIAS), according to The Mail on Sunday.
The EES is expected to take effect on October 6th while ETIAS will follow in 2025. Both travel schemes require non-EU visitors to submit biographic and biometric data to enter Schengen countries.
From The Mail:
Patrick Breyer, a German MEP, dismissed the 'lie detector' test as 'pseudoscience', saying it is not possible to determine if someone is lying from facial gestures. He added: 'It will discriminate against anyone who is disabled or who has an anxious personality. It will not work.'
Yeah, getting grilled by an AI before boarding the 6:15am Easyjet flight to Marbella with two screaming children definitely won’t enhance your calm.
Anyway, when has the efficacy of a surveillance technology been a determining factor in its deployment?
Seriously, I want to know. When? Find an example and tell me, I’ll wait.
How deep is the AI border testing going to go? Well, if the pilot programs, iBorderCtrl and TRESPASS, are any indication, it’s ugly.
The Mail again (emphasis mine):
One pilot scheme even checked an applicant's social media accounts before allowing them entry, raising fears political or controversial comments made on X or Facebook could lead to someone being banned.
In the iBorderCtrl trial, carried out between 2016 and 2019 in Greece, Hungary and Latvia, avatars were used to interview applicants and monitor their expressions. TRESPASS, tested until November 2021, analysed 'facial expressions, gestures and body postures' to assess if a 'traveller is telling the truth', according to official papers.
The Artificial Intelligence Act does not ban border agencies from using such technology.
As Biometric Update put it:
According to the EU AI Act, emotion recognition is defined as a high-risk AI system. Critics, however, point out that the regulation leaves space for its use in law enforcement and migration control.
Where will all this data go once it is gathered? If the EU is gathering fingerprints, iris scans, voice recordings, facial samples, they must want to hang on to them for a while for the purposes of enforcement and tracking, right?
Back to The Mail (emphasis mine):
The EU is also creating a super-database called the Common Identity Repository (CIR), which will hold 300 million records of people, including terrorists and criminals. The data of all Britons entering the EU will end up in the CIR.
Brilliant.
Network Rail Gets Emotional
Wired broke a story this week detailing the use of Amazon AI-enabled surveillance cameras at UK rail stations, about which of course the public knew nothing.
Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. The image recognition system was used to predict travelers’ age, gender, and potential emotions—with the suggestion that the data could be used in advertising systems in the future.
Surveillance is nothing new in the UK, and especially in London, the only “winner” in the World’s Top 10 Most-Surveilled Cities that isn’t in China. Even so, the encroachment is becoming so full-on as to make writing about it an RSI risk.
More from Wired:
During the past two years, eight train stations around the UK—including large stations such as London’s Euston and Waterloo, Manchester Piccadilly, and other smaller stations—have tested AI surveillance technology with CCTV cameras with the aim of alerting staff to safety incidents and potentially reducing certain types of crime.
The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition—a type of machine learning that can identify items in video feeds—to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior (“running, shouting, skateboarding, smoking”), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow.
Our friends at Big Brother Watch pursued a freedom-of-information request that yielded documents related to the covert operation to monitor passengers and passers-by for fun and profit.
According to the documents, this setup could use images from the cameras to produce a “statistical analysis of age range and male/female demographics,” and is also able to “analyze for emotion” such as “happy, sad, and angry.”
The use cases given by Network Rail in the documents are legion, but do they merit this level of intrusive surveillance? Is trespass prevention, general safety, anti-theft vigilance, service efficiency, the counting of passengers, or noticing when floors get slippery impossible or too difficult to do without AI-enabled CCTV?
At what point did delivering the service of public transport begin to require military-grade surveillance technology? Can’t we just take a train without worrying if our emotions, face, identity, and location are being tracked, or whether our umbrella will trigger a false positive for a weapon and lead to law enforcement intervention?
I’ll let an expert quoted in Wired have the final word on this:
“Systems that do not identify people are better than those that do, but I do worry about a slippery slope,” says Carissa Véliz, an associate professor in psychology at the Institute for Ethics in AI, at the University of Oxford.
[…]
“There is a very instinctive drive to expand surveillance,” Véliz says. “Human beings like seeing more, seeing further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.”
That’s it for this week, everyone. Thanks as always for reading.
Outro music is the soothing sound of Patsy Cline singing Crazy, a song about feeling lonely and blue, only the latter of which is possible in a world where the cycloptic gaze of a camera is always upon you.
Worry
Why do I let myself worry?
Wondering
What in the world did I do?
Stay sane out there, friends.