The Weekly Weird #64
Sawasdee-cop, Britain transitions to clarity, insane in the brain, Pokemon Go(es) to Saudi Arabia, 23andUS, Aussie unFaRT, WHO cares a lot, is our children learning?
Welcome once again to your Weekly Weird, the world’s foremost1 factually accurate but resolutely sarcastic dystopian news roundup!
Never let it be said that the Weird doesn’t move with the times. In this week’s edition, I’ve experimented with creating songs for some of the items using our friendly neighbourhood generative AI. Let me know what you think. Maybe we’ll have enough tunes by the end of the year to put out a Christmas album.
Podcast stuff before we get into this week’s dystopian doings:
If you haven’t already, check out the latest episode. I’m joined by Dr Chris Day, an NHS whistleblower at (hopefully) the tail end of a decade-plus courtroom drama.
The next episode (coming this Sunday) is a conversation with Leni von Mayn, a self-described “sex-plorer” who provides the voice for the synthetic ladies at Berlin’s CyBrothel. For added context, you can check out my conversation with the founder, Philipp Fussenegger.
Let’s do some news!
Sawasdee-Cop: Thailand, often called the Land of Smiles, “has unveiled its first AI-powered police robot, deployed during the Songkran festival in Nakhon Pathom province”. The robo-cop has been given the snappy name of “Pol Col Nakhonpathom Plod Phai”, which means “Nakhon Pathom is safe”.
“The robot, dubbed “AI police cyborg 1.0”, is a Robocop-style robot officer equipped with smart 360-degree AI cameras…[integrating] live footage from CCTV cameras in and around the event area, as well as from drone surveillance, and process[ing] them using AI technology built into the cyborg’s system...[which is] linked directly to the province’s Command and Control Centre...”
“Using facial recognition technology, the cyborg is able to identify and notify officers if wanted or high-risk individuals are detected. Individuals can be searched and identified through their facial features, clothing, body type, and gender.”
Here’s a look at Thailand’s FaRT-enabled robocop:
Feel safe yet?
Here’s the newly minted police colonel looming over a crowd of revellers who are not, for whatever reason, fleeing in horror:
If the Thai citizenry can smile at that, they can smile at anything. I asked AI to generate a theme tune for a cop show about robots policing the streets, and it even wrote lyrics for it as well. Coming soon: Mechanical Justice!
Robot cops, keepin’ it tight
Servin’ up the law, day and night
Britain Transitions To Clarity: Speaking of the law, the UK’s Supreme Court has handed down a landmark ruling on the definition of sex in the context of the Equality Act of 2010. While reactions from all sides have been emotional and not entirely relevant to what the court actually wrote, I read the 88-page judgment and found it to be a well-reasoned and sensitive decision. Rather than a rescindment of trans rights, it is a clarification of the legal definitions that confer and determine protections under existing law. By defining “sex” within the Equality Act as meaning “biological sex” rather than “certificated sex”, the court has extended rights that could previously have been denied to trans people under a “variable meaning” interpretation. It has also ended the grey area around sexual orientation that caused concern to women regarding their right to single-sex spaces, and made clear that the governments of Britain cannot choose and change what the word “sex” means in different sections of the same law.
In the words of the court:
“Interpreting “sex” as certificated sex would cut across the definitions of “man” and “woman” and thus the protected characteristic of sex in an incoherent way. It would create heterogeneous groupings. As a matter of ordinary language, the provisions relating to sex discrimination, and especially those relating to pregnancy and maternity, and to protection from risks specifically affecting women, can only be interpreted as referring to biological sex…”
“That [certificated] interpretation would also seriously weaken the protections given to those with the protected characteristic of sexual orientation for example by interfering with their ability to have lesbian-only spaces and associations.”
TL;DR:
“The meaning of the terms “sex”, “man” and “woman” in the EA 2010 is biological and not certificated sex. Any other interpretation would render the EA 2010 incoherent and impracticable to operate.”
The ruling does not legally affect the rights of trans people without a gender recognition certificate, as the case dealt only with the question of whether the definition of sex under the Equality Act was biological or certificated.
Here’s AI’s summary of the case, in musical form (lyrics by NotebookLM, music by Suno):
Insane In The Brain: A recent article on “motivated formal reasoning” cites a 2022 study that found (shocker incoming) that political bias adversely affects a subject’s ability to reason. When presented by researchers with political and non-political syllogisms2, “participants were significantly better at solving syllogisms when the correct answer was consistent with their own political beliefs, and significantly worse when it favored the opposing ideology.”

“[L]eftists and social liberals performed worse on syllogisms where the correct answer was not aligned with leftist ideology, whereas general and rightists performed worse when the correct answer was not aligned with rightist ideology”
The good news is that “individuals who engage in more analytical thinking are somewhat immune to the reason-impairing effects of ideology”, although “the impact of analytical thinking was modest and inconsistent.”
In short, research shows that ideological bias is an obstacle to reason and understanding, regardless of the ‘side’ someone is on.
“That may be so,” I hear you ask. “But did you turn the study into rap lyrics and then get AI to drop it like it’s hot over a phat beat?”
Word.
Your viewpoint can mess with your brain
WHO Cares A Lot: “The nations of the world made history in Geneva today,” said WHO Director-General Tedros Adhanom Ghebreyesus last week, in reference to the finalising by WHO member states of the draft Pandemic Agreement first proposed during the Covid period. UN News explained that “the draft outlines a framework for strengthening international collaboration, equity and resilience in the face of future global health threats” but “states explicitly that nothing in the agreement gives WHO the authority to mandate health measures such as lockdowns, vaccination campaigns, or border closures.”
“Participating manufacturers will have to allocate 10% of their production of vaccines, therapeutics and diagnostics to the WHO. Another 10% will then be supplied at "affordable prices". Countries also approved the transfer of health technologies to poorer nations as long as it was "mutually agreed".”
“At the core of the agreement is a proposed Pathogen Access and Benefit-Sharing System (PABS), allowing the faster exchange of data between pharmaceutical companies.”
The proposed agreement is due to be presented to the UN’s General Assembly for consideration in May. No news coverage or press release has mentioned any provisions related to preventing or monitoring gain-of-function research, increasing and enforcing lab safety, or sanctioning a government that erases virus data and refuses to grant research and investigatory access to international scientists and fact-finding inspectors. “Lessons learned”, right?
Book of the Month: In my research for this week’s Weird, I accidentally found an opinion piece by Sarah Ditum, written at Easter 2013 for The Guardian, in which she sings the praises of the classic dystopian novel Riddley Walker by Russell Hoban. Hoban wrote the novel, set years after a nuclear apocalypse, in a unique vernacular intended to capture the essence of living, thinking, and writing in a language ravaged by the passage of time. If you haven’t read it, there’s no time like the present. Here’s a clip of the author talking about the book:
Is Our Children Learning?: “Most of our students are functionally illiterate,” says a professor with more than three decades of teaching experience, in a recent essay that paints a bleak picture of the average American university student. It’s a tough read.

“I’m not saying our students just prefer genre books or graphic novels or whatever. No, our average graduate literally could not read a serious adult novel cover-to-cover and understand what they read. They just couldn’t do it. They don’t have the desire to try, the vocabulary to grasp what they read, and most certainly not the attention span to finish.”
Besides citing student disinterest, incapacity, and dishonesty as serious problems, the author also points out fundamental issues in their grasp of two out of the three Rs3: “Spelling is atrocious, grammar is random, and the correct use of apostrophes is cause for celebration. Worse is the resistance to original thought.”
The issue is compounded by a vicious cycle driven by the advent of LLMs, to which students have flocked in droves to crank out fabricated schoolwork and coast through their courses: “I can’t assign papers any more because I’ll just get AI back, and there’s nothing I can do to make it stop. Sadly, not writing exacerbates their illiteracy…”
Students using AI is no longer the only problem, either. As Justin Weinberg wrote for the Daily Nous last week, now some of the students are AI.
“You’ve been worried that your students have been using ChatGPT and other LLMs to cheat? Hahaha. How quaint. Time for an update. Here’s something new for you to worry about: criminal enterprises managing networks of AI-powered bots to impersonate students for the purposes of stealing financial aid.”
Voice of San Diego reported recently on this new trend in cybercrime, including in one instance a professor at Southwestern College who discovered that only 15 of the 104 students enrolled for her course were humans. “The rest, she’d concluded, were fake students, often referred to as bots.”
“The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.”
How widespread is the problem?
“During 2024 alone, fraudulent students at California community colleges swindled more than $11 million in state and federal financial aid dollars — more than double what was stolen the year prior.
Last year, the state chancellor’s office estimated 25 percent of community college applicants were bots.”
Once the bots are rooted out and booted off the course, the trouble just starts all over again.
“Even after dropping the fraudulent students, though, the bot nightmare isn’t over.
As soon as seats open up in classes, professors often receive hundreds of nearly identical emails from purported students requesting they be added to the class. Those emails tended to ring some linguistic alarm bells. They feature clunky phrases that are uncommon for modern students to use like “I kindly request,” “warm regards,” or “I look forward to your positive response.” Much of that stilted language lines up with what she’s seen from the AI-generated content submitted by bot students. That mad bot-powered dash for enrollment has left some students unable to register for the classes they need. It has also given rise to a sort of whisper network, where professors recommend students reference them by name when trying to get added to other classes.”
Students don’t have to be fake to play fast and loose with ethical considerations, though. Justin Weinberg reported in 2023 on an experiment tried by Garret Merriam, an ethics professor who planted a doctored version of his course’s final exam on a well-known cheating site to see if his students would use “answers [that] were not just wrong, but obviously wrong to anyone who had paid attention in class.” He was stunned to discover that “40 out of 96 students looked at and used the planted final for at least a critical mass of questions.” That’s a cohort in which over 40% cheated…on an ethics course.
Another recent piece brought these elements together, examining the impact of technology and other factors on poor educational outcomes, and links to eye-opening studies by OpenAI and Microsoft that, in brief, indicate students are relying on ChatGPT to do schoolwork for them, and that “higher confidence in GenAI is associated with less critical thinking”.

Heavy stuff, I know. But the real question is: Did I download both of those studies, use AI to synthesise them into song lyrics, and then generate a comedy song about how AI is melting the brains of the next generation?
Well, since you asked…
AI is a wizard, the work's done with ease
But brains might get mushy, like overripe peas
Aussie UnFaRT: A small crumb of consolation to the anti-FaRT crowd now…Biometric Update reported last week that “New South Wales police have deactivated facial recognition capabilities in their suspect identification system following concerns about its accuracy.”
“The facial recognition software developed by German facial recognition company Cognitec was disabled within the PhotoTrac Suspect Identification in February, the police said in a response to a Budget Estimates inquiry by the New South Wales (NSW) Parliament. The identification tool compares images of potential suspects caught on CCTV against mugshots and is part of the Suspect Identification System (SIS) used by the NSW Police Force.
The police did not list a reason for deactivating Cognitec’s algorithm, which has been in use since 2011. The move, however, comes after local media uncovered that Cognitec’s technology misidentified West Africans almost seven times more often than it did Europeans in NIST testing performed in August last year. The Face Recognition Technology Evaluation (FRTE) Demographic Differentials report shows the highest false match rate (FMR) for Cognitec’s algorithm among all demographic groups tested is 0.538 percent for West African females age 65 to 99.”
23andUS: We looked at the bankruptcy of the genetic data-miner 23andMe in Weird #61, but the story is still developing.
The US Department of Justice has imposed special conditions on the sale of the bankrupt company’s assets, according to Bloomberg:
“23andMe is restricted from selling genomic information to firms that are 50% or more owned by companies headquartered in a handful of countries including China, Russia and North Korea, U.S. Attorney Sayler Fleming said in a court filing. Businesses are also restricted from obtaining investments from people who live primarily in those countries, Fleming said.”
A Congressional press release also announced last week that, as “a direct-to-consumer genetic testing company in possession of millions of Americans’ personal DNA data”, 23andMe’s bankruptcy “potentially allows for the transfer of customers’ and family members’ personal data, including to malign foreign actors such as the Chinese Communist Party.” We knew that already, but the House Committee on Oversight and Government Reform led by Chairman James Comer on whose behalf the release was issued “is requesting documents and information today related to 23andMe’s bankruptcy filing and is calling on Board Member Anne Wojcicki to testify at a public hearing scheduled for May 6, 2025.” Are they genuinely concerned about protecting the privacy of their citizens, or just piggy-backing on the DoJ? We’ll find out after May 6.
Some Extra Weirdness: As part of our ongoing exploration of the impact AI is having on the internet, and the way that AI-generated content is slop-ifying the web, I got an at-first-glance-acceptable-but-on-reading-clearly-fake result in my search on the above story. A link from AInvest to a story on their site called Genetic Data and National Security: Why 23andMe's Bankruptcy is a Market Turning Point initially seemed legit, but while reading the article I got the ‘uncanny valley’ sense that often kicks in when I’m reading AI-generated text. Then I noticed an easy-to-miss tell regarding the “author” of the post:
Philip Carter has been busy…
AI-generated images for AI-generated stories by an AI-generated writer whose profile pic is AI-generated as well, all just there to pad out a website that advertises Aime, “Your AI Investment Assistant”.
The modern world is metatextually exhausting, isn’t it?
Pokemon Go(es) To Saudi Arabia: Pokemon Go, the geospatial game where people wander around staring at their phones, hoping to catch an animated character, has been sold to Scopely, “a wholly-owned subsidiary of a Saudi Arabian company called Savvy Games, which itself is owned by the Saudi Arabian government’s Public Investment Fund.”
The game involves the gathering of the location data of its users. After all, the clue is in the name: You can’t track someone geospatially if you don’t know where they are. Now, a company effectively owned by the Saudi Arabian government will (possibly) have access to all that information.
Scopely, Niantic, and Savvy Games have collectively published six separate blog posts about the $3.85 billion deal, none of which specifically address what is happening with the location data of Pokémon Go’s 100 million players and none of which address how location data collected in the future will be handled under Scopely and its Saudi Arabian owners.
Samantha Bradshaw and Dean Jackson, writing for Tech Policy Press, with more:
Niantic collects vast amounts of user data, and its terms of service agreements have been criticized for allowing broad access to this information. While the company has stated in its blog that data is collected to improve gameplay and mapping accuracy, the company has been less forthcoming about how it uses data internally. With Niantic’s recent acquisition by the Saudi government, it is also unclear how the sale will impact the data policies and practices for games like Pokémon Go and whether new ownership could lead to expanded data collection practices or applications of sensitive user information.
Location data is sensitive. The geospatial data collected by Pokemon Go is not just about mapping PokéStops: it provides insight into how people move through cities, which locations they frequent, and even patterns of congregation.
Similarly, the app uses smartphone cameras to capture images of potentially sensitive locations, including those that are not easily mapped, like the insides of houses and buildings. In recent years, Niantic has also been building a Visual Position System (VPS), which “uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scanverse.” In 2024, the company claimed to be collecting as many as a million new scans a week. Using these, it planned to build a “large geospatial model” to identify places in the real world and inside buildings, a far more powerful tool than satellite photography or even tools like GeoSpy, which can identify outdoor locations from a single image. If the company lacks sufficient images of a certain location, that’s easily remedied—send players there on a “scan mission” and watch the images roll in.
Niantic claims it will be spinning off its mapping project into a company called Niantic Spatial, but, as 404 Media reported, “Scopely’s direct, $50 million investment in it suggests that monetizing Pokémon Go players’ location data and AR camera scanning data is very much still part of the plan.”
Why would it be an issue for Saudi Arabia’s government to have access to that kind of data?
Bradshaw and Jackson again:
The Saudi government has a well-documented history of human rights abuses, government surveillance, and cyber operations. In 2018, the Saudi government used spyware to surveil, then murder, Washington Post journalist Jamal Khashoggi. Surveillance is rampant in Saudi Arabia, with the government frequently monitoring every aspect of its citizens' digital lives. The Saudi government has continued to invest in mass surveillance infrastructure, which has been used to spy on activists, journalists, and political dissidents. Strict cybercrime laws criminalize even mild expressions of dissent, with several individuals facing prison sentences for peaceful social media posts. The government has also been known to use state-sponsored troll armies and bot networks to harass and intimidate online activists to further its digital repression and censorship regime.
Bradshaw and Jackson go on to point out that “Saudi Arabia’s purchase of this popular mobile game highlights the increasingly evident futility of building a firewall between US user data and foreign companies.”
As 404 Media put it, “an already very complicated and vast location data ecosystem that was previously controlled by only one American company (Niantic) has now become a far more complicated location data ecosystem controlled [by] Scopely, an American company that is wholly owned by a Saudi Arabian conglomerate whose largest shareholder is the Saudi Arabian government.”
Will there really be meaningful separation between how the US and Saudi companies access and handle data? Once the data is hoovered up, can any outsider count on the companies involved to handle it sensitively, in accordance with the law, or even within their own guidelines as a commercial entity?
Bradshaw and Jackson propose an alternative:
A better way to protect user privacy is to set universal limits, applicable to foreign and domestic companies alike, about what data can be collected from users and how it can be used.
They can’t misuse what they don’t have, after all.
That’s it for this week’s Weird, everyone. Thank you as always for reading.
Outro music is Swedemason’s Paul McCartney Rabbit Hole, an ode to (ab)using a computer to make music.
Donk
Only?
An example given in the study, of a “logically invalid syllogism with a believable conclusion”:
“Premise 1: If birds have wings, then birds can fly.
Premise 2: Birds can fly.
Conclusion: Therefore, birds have wings.”
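For the logically inclined: that argument form is the classic fallacy of “affirming the consequent”, and you can confirm its invalidity by brute force. Here’s a quick truth-table check (a purely illustrative Python sketch; the function and variable names are mine, not the study’s):

```python
from itertools import product

# An argument form is valid iff the conclusion holds in EVERY case
# where all the premises hold. Brute-force all truth assignments.
def valid(premises, conclusion):
    return all(conclusion(w, f)
               for w, f in product([True, False], repeat=2)
               if all(p(w, f) for p in premises))

p1 = lambda w, f: (not w) or f   # Premise 1: if birds have wings (w), birds can fly (f)
p2 = lambda w, f: f              # Premise 2: birds can fly
concl = lambda w, f: w           # Conclusion: therefore, birds have wings

print(valid([p1, p2], concl))    # → False
```

The counterexample is the assignment w=False, f=True: a world with wingless flyers satisfies both premises but not the conclusion. (Swapping premise 2 for “birds have wings” turns this into modus ponens, which the same checker confirms is valid.)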
As the joke goes: Readin’, ‘ritin’, and ‘rithmetic.