The Weekly Weird #68
Actually Indian, don't abort and drive, NYC protects kids with AI, 1984 is problematic, airline intel, Spy High, Sordid Camera, we was robbed, AI-native universities, Uber's robo-taxis, FaRT
Welcome back to your Weekly Weird, our shared sandbox for exploring the deplorable and laughing at the inevitable. What else is there to be done?
This week was particularly funny/strange/horrible, so let’s get into it!
1984 Is Problematic: In what may be the strongest contender so far for this year’s Irony Award, Newsweek reported last week that “[t]he introduction to the new edition [of Orwell’s novel Nineteen Eighty-Four], endorsed by Orwell's estate and written by the American author Dolen Perkins-Valdez” includes pointed criticisms of the book. Perkins-Valdez notes that the novel “does not speak much to race and ethnicity” and that, because of this, “a sliver of connection can be difficult for someone like me to find”. She also writes that Winston Smith “reveals himself to be a problematic character” because, per Orwell’s description of him, he “disliked nearly all women, and especially the young and pretty ones.”
The irony of the critique, and especially of the backlash it has inspired, is compounded by the fact that Orwell himself, reviewing a collection of Kipling’s verse in 1942, pulled no punches:
Kipling is a jingo imperialist, he is morally insensitive and aesthetically disgusting. It is better to start by admitting that, and then to try to find out why it is that he survives while the refined people who have sniggered at him seem to wear so badly.
I may not agree with Perkins-Valdez’s assessment of arguably the most influential dystopian novel of all time, but I do think ol’ Eric Arthur Blair would be down with slapping a much-loved author around a bit in print.
Rushdie Denied: Speaking of slapping a much-loved author around a bit, consistently-hunted novelist Salman Rushdie, who survived a near-fatal knife attack in 2022 that left him blind in one eye, pulled out of delivering the commencement speech at Claremont McKenna College “after student and local Muslim groups ‘condemned’ Rushdie's invitation and said it was ‘disrespectful’ for him to step foot on the campus”. Claremont College Muslim Students Association president Kumail Afshar was reportedly “surprised, relieved and happy” to hear of Rushdie’s decision not to participate.
Actually Indian: First up is this little tidbit that calls back nicely to Episode 126 of the podcast, in which I spoke with Nikhil Suresh about the cons and cons of AI. At one point in our chat, he said that in tech there’s a joke that AI stands for ‘Actually Indian’, since so many ‘innovations’ turn out to be Indian IT workers pretending to be an automated solution. Well, funny story…
The Latin Times reported last week that Builder.ai, a tech startup once valued at $1.5 billion, “has filed for bankruptcy after it was revealed that its so-called artificial intelligence was actually hundreds of human workers in India pretending to be chatbots.” The scandal is “one of the biggest failures of the post-ChatGPT AI investment boom”.
The London-based startup had “raised over $445 million from big names including Microsoft and the Qatar Investment Authority” but a lender found it had also “inflated its 2024 revenue projections by 300%” and it subsequently emerged that “nearly 700 engineers in India were manually coding customer requests”.
“The company now owes millions to Amazon and Microsoft in cloud computing costs and has laid off around 1,000 employees,” reported the Latin Times.
In other words, it’s a story as old as the hills. A scam is a crime, but Scam.ai is “an opportunity.”
Spy High: Prime Video recently released a new documentary series called Spy High, about the 2010 ‘WebcamGate’ scandal in a Philadelphia-area school district where high school students discovered they had been spied on through their school-issued laptops. That was fifteen years ago - obviously we don’t have those sorts of problems anymore.
Sordid Camera: Candid Camera was a television institution, running on and off from the late 1940s, that spawned generations of imitators like Punk’d as well as its own 90s reincarnation hosted by the one and only Dom DeLuise. The audience laughs while some unsuspecting individual reacts to something unexpected or ridiculous without knowing they’re being filmed. All in good fun.
You’re in focus, it’s your lucky day
Smile! You’re on candid camera.
The 21st-century equivalent is Sordid Camera, in which ‘upskirting’ has to be outlawed because it is so common, dick pics are de rigueur on social media and dating apps, and, of course, micro-cameras enable a new cohort of voyeurs to creep on their tenants and guests.
The Guardian reports on the experience of a newly-divorced woman who was offered a place to stay for eleven months by a good Samaritan who, she later discovered, was not only recording her secretly with a hidden camera in the bathroom but “had been filming women through hidden cameras for almost 15 years, creating an extensive digital library neatly organised under their names.”
This was not an isolated incident either, merely one egregious enough to merit national coverage and a custodial sentence for the perv-etrator.
“Just this month, a doctor was jailed for filming guests at his Glasgow Airbnb for more than three years, through cameras in bedrooms and bathrooms disguised as air fresheners (one pointed at the loo, the other at the shower) and a smoke alarm. Other UK cases include photographer David Glover, who filmed more than 100 women using covert cameras in his studio changing rooms; Adam Devaney, who used a camera disguised as a pen to film colleagues in the toilet of his North Yorkshire workplace, and Adam Dennis and Robert Morgan, who used hidden cameras to film more than 5,000 people in swimming pool changing rooms and toilets in London and Surrey, then shared and traded the images online.”
There was even an electrician who “had used hidden cameras to film women (some of them his customers) in their bedrooms, bathrooms and pub toilets for over a decade”, presumably installing them while doing routine electrical work.
The article also includes the story of ‘Linda’, who spent eight years in an abusive relationship with a man who tracked her relentlessly.
“At first, he was charming and attentive but the control started subtly,” she says. “First, he convinced me to delete my social media because it was ‘bad for my mental health’. Eventually, I stopped going out with my friends as it caused so many arguments, it was easier not to.” Isolated, home alone, she would sit down to watch TV and have a cup of tea and he would text to ask, “Enjoying your cup of tea?” If she went to answer the doorbell, he would message, “Who was at the door?”
“I was so confused,” says Linda. “It was like living in a television show – like it wasn’t actually my life. I tried sitting down at different times in case he was just guessing my routine but he always knew. He seemed to know my every move.”
The Guardian article does read a bit like a so-called ‘infotorial’, an ad masquerading as an editorial, in the sense that it is clearly setting out the case for banning certain types of cameras that can be misused in this way. Whether or not that is a necessary or reasonable step is up for debate, but Linda’s description of being watched constantly, and how it permeated her life with a sense of unreality, made me think of the ever-increasing amount of government surveillance to which we are subjected in the UK. Granted, when the government does it, they at least have the argument that they are acting on a mandate or in the public interest, but is it really that much creepier when a voyeur does it to get his rocks off¹? That’s not a defence of the crime or the criminal, but a genuine question. Is it more worrying to be filmed involuntarily by the government or a fellow citizen?
Answers on a postcard or, failing that, in the comments.
Don’t Abort And Drive: Speaking of surveillance cameras, 404 Media report that “authorities in Texas performed a nationwide search of more than 83,000 automatic license plate reader (ALPR) cameras while looking for a woman who they said had a self-administered abortion, including cameras in states where abortion is legal such as Washington and Illinois, according to multiple datasets”.
The technology is “made by a company called Flock and usually marketed to individual communities to stop carjackings or find missing people”, but law enforcement, while “required to provide a reason” for requesting access to their database, “generally do not require a warrant or any sort of court order”. The local sheriff engaged in “a nationwide search”, with “evidence of the search appear[ing] in logs held by different police departments on the other side of the country from Texas”.
Sheriff Adam King of Johnson County argued that “her family was worried that she was going to bleed to death [after a self-administered abortion], and we were trying to find her to get her to a hospital”.
Ashley Emery, senior policy analyst in reproductive health and rights at the National Partnership for Women & Families, told 404 Media “The risks of this intrusive government monitoring cannot be overstated: law enforcement could deploy this surveillance technology to target and try to build cases against pregnant people who travel for abortion care and those who help them. This incident is undeniably a harbinger of more AI-enabled reproductive surveillance and investigations to come.”
Uber’s Robo-Taxis: Some exciting news for the already-suicidal pedestrians of London. BBC News reports that “Uber will trial robotaxis - autonomous cars with no human safety driver at the wheel - in London next spring.” The British government is “introducing an accelerated framework for small autonomous ‘bus and taxi like’ commercial services to get them underway earlier,” although it is unclear how Uber qualifies as a “small” service. In one of those statements that gets printed unquestioningly and rarely re-examined after the fact, “[t]he Department for Transport says the industry could create 38,000 jobs and add £42bn to the UK economy by 2035.” How is replacing Uber drivers with unmanned vehicles going to create jobs? Spread over the decade to 2035, £42bn shared among 38,000 jobs works out to roughly £110,000 per person per year - are those the jobs on offer? How else do they expect to rake in £4.2 billion per year by making human labour obsolete?
New York Protects Kids With AI: The Markup reports that New York City’s Administration for Children’s Services (ACS) has been using an “AI-powered system” they refer to as a “child safety algorithm” to predict and rank the risk the city’s families present to their children. The system was “developed by an internal data team at the agency, the Office of Research Analytics” and “uses 279 variables to score families for risk, based on cases from 2013 and 2014 that ended in a child being severely harmed”.
Some factors might be expected, like past involvement with ACS. Other factors used by the algorithm are largely out of a caretaker’s control, and align closely with socioeconomic status. The neighborhood that a family lives in contributes to their score, and so does the mother’s age. The algorithm also factors in how many siblings a child under investigation has, as well as their ages. A caretaker’s physical and mental health contributes to the score, too.
It was also a secret.
Even those who work closely on child welfare issues in the city often don’t know ACS’ algorithm exists: several lawyers, advocates, and parents learned about it for the first time from The Markup, and those who did know about it were unaware of the factors that contribute to a score.
There is also a racial component.
Black families are more likely to have child welfare services called to investigate them, for example, and calls to ACS contribute to a family’s score. The algorithm, according to a report obtained by The Markup, does not explicitly use race to score families, but uses “variables that may act as partial proxies for race (e.g., geography)” to make its decisions, including a family’s county (or borough), zoning area, and community district.
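To make the proxy problem concrete, here is a toy sketch - invented weights and variable names, with no relation to ACS’s actual 279-variable model - of how a score can shift for two families identical in everything except their neighbourhood:

```python
# Toy risk-scoring sketch. All weights and variable names are invented
# for illustration; nothing here reflects ACS's real model.

HYPOTHETICAL_WEIGHTS = {
    "prior_acs_calls": 0.9,        # past involvement with the agency
    "mother_under_25": 0.4,        # a factor outside the child's control
    "per_sibling": 0.2,
    "community_district": {        # geography: a partial proxy for race
        "district_a": 0.7,         # heavily-reported neighbourhood
        "district_b": 0.1,
    },
}

def risk_score(family: dict) -> float:
    """Sum weighted factors into a single 'risk' number."""
    w = HYPOTHETICAL_WEIGHTS
    return (
        w["prior_acs_calls"] * family["prior_acs_calls"]
        + w["mother_under_25"] * (family["mother_age"] < 25)
        + w["per_sibling"] * family["num_siblings"]
        + w["community_district"][family["community_district"]]
    )

base = {"prior_acs_calls": 1, "mother_age": 30, "num_siblings": 2}
print(risk_score({**base, "community_district": "district_a"}))  # ≈ 2.0
print(risk_score({**base, "community_district": "district_b"}))  # ≈ 1.4
```

Race never appears as an input, but the district weight does the work all the same - which is exactly the ‘partial proxy’ dynamic the report describes.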
Speaking of which…
FaRT: MyLondon reports that “[a]n anti-knife crime activist has launched a legal challenge against the Metropolitan Police after he was threatened with arrest due to a mistake by live facial recognition technology.”
Shaun Thompson “was wrongly flagged as a suspect on the Met's facial recognition database”. He says he “was given a nudge on the shoulder after walking past a white van, then held by police for almost half-an-hour…[and] only let go after handing over a copy of his passport.”
The incident, which took place in 2024, came as a shock to Thompson, who volunteers in youth crime prevention.
"They were telling me I was a wanted man, trying to get my fingerprints and trying to scare me with arrest, even though I knew and they knew the computer had got it wrong. Instead of working to get knives off the streets like I do, they were wasting their time with technology when they knew it had made a mistake," said Mr Thompson.
"I was angry that I had been stopped by a flawed technology that misidentified me as someone else and was treated as though I was guilty. I'm bringing this legal challenge because I don't want this to happen to other people. Facial recognition is like stop and search on steroids and doesn't make communities any safer. It needs to be stopped."
Silkie Carlo, the director of Big Brother Watch, commented on the case:
"Shaun's story is proof that facial recognition surveillance poses a real threat to the public's rights and should be urgently banned. It's vital we roll back facial recognition and defend people's rights with this groundbreaking legal action."
Big Brother Watch are crowdfunding for the case.
Airline Intel: Wired reported this week that “a data broker owned by the country’s major airlines, including Delta, American Airlines, and United, collected US travelers’ domestic flight records, sold access to them to Customs and Border Protection (CBP), and then as part of the contract told CBP to not reveal where the data came from, according to internal CBP documents obtained by 404 Media.”
The Airlines Reporting Corporation (ARC) “is owned and operated by at least eight major US airlines” and “[m]ore than 240 airlines depend on ARC for ticket settlement services”.
“The sale of US fliers’ travel information to the government is part of ARC’s Travel Intelligence Program (TIP).”
The amount of data being hoovered up is stunning, making the contractually-requested secrecy even more grotesque.
The new documents obtained by 404 Media also show ARC asking CBP to “not publicly identify vendor, or its employees, individually or collectively, as the source of the Reports unless the Customer is compelled to do so by a valid court order or subpoena and gives ARC immediate notice of same.”
The Statement of Work says that TIP can show a person’s paid intent to travel and tickets purchased through travel agencies in the US and its territories. The data from the Travel Intelligence Program (TIP) will provide “visibility on a subject’s or person of interest’s domestic air travel ticketing information as well as tickets acquired through travel agencies in the U.S. and its territories,” the documents say. They add that this data will be “crucial” in both administrative and criminal cases.
A DHS Privacy Impact Assessment (PIA) available online says that TIP data is updated daily with the previous day’s ticket sales, and contains more than one billion records spanning 39 months of past and future travel. The document says TIP can be searched by name, credit card, or airline, but ARC contains data from ARC-accredited travel agencies, such as Expedia, and not flights booked directly with an airline. “If the passenger buys a ticket directly from the airline, then the search done by ICE will not show up in an ARC report,” that PIA says. The PIA notes that the data impacts both US and non-US persons, meaning it does include information on US citizens.
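As a rough sketch of that coverage gap (hypothetical field names; ARC’s actual schema is not public), note that a direct-from-airline purchase never enters the dataset in the first place, so no search key can surface it:

```python
# Hypothetical TIP-style lookup. Field names and rows are invented;
# the point is that only agency-booked tickets are ever ingested,
# so the gap is in collection, not in how cleverly you query.

TIP_RECORDS = [
    {"name": "J. Doe", "card_last4": "4242", "airline": "UA"},
    # A ticket bought directly from the airline has no row here at all.
]

def tip_search(name=None, card_last4=None, airline=None):
    """Filter on any of the three search keys the PIA mentions."""
    return [
        r for r in TIP_RECORDS
        if (name is None or r["name"] == name)
        and (card_last4 is None or r["card_last4"] == card_last4)
        and (airline is None or r["airline"] == airline)
    ]

print(tip_search(name="J. Doe"))        # agency booking: found
print(tip_search(name="Direct Buyer"))  # direct purchase: [] - invisible
```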
It has become disturbingly common for organs of the American state to access otherwise-legally-inaccessible data by purchasing it from the private sector. As Wired puts it:
US law enforcement agencies have repeatedly turned to private companies to buy data rather than obtain it through legal processes such as search warrants or subpoenas. That includes location data harvested from smartphones, utility data, and internet backbone data.
404 Media contacted Jake Laperruque, deputy director of the Center for Democracy & Technology's Security and Surveillance Project, who told them that “the government seems intent on using data brokers to buy their way around important guardrails and limits.”
“It's clear the data broker loophole is pushing the government back towards a pernicious ‘collect it all’ mentality, gobbling up as much sensitive data as it can about all Americans by default. A decade ago the public rejected that approach, and Congress passed surveillance reform legislation that banned domestic bulk collection. Clearly it's time for Congress to step in again, and stop the data broker loophole from being used to circumvent that ban,” he added.
We Was Robbed: The Guardian reported last week that “[t]he social media platform Reddit has sued the artificial intelligence company Anthropic, alleging that it is illegally ‘scraping’ the comments of Reddit users to train its chatbot Claude.”
The Verge reported that Reddit claimed Anthropic “had accessed its platform more than 100,000 times since July 2024, after Anthropic allegedly said it had blocked its bots from doing so.”
The Verge went on:
Last August, three authors filed a class-action lawsuit in California federal court against Anthropic, alleging in a filing that the company had “built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books.” And in October 2023, Universal Music sued Anthropic in a Tennessee federal court over “systematic and widespread infringement of their copyrighted song lyrics.”
It’s part of an increasing trend of publishers and content creators suing AI companies over alleged copyright infringement. OpenAI, creator of ChatGPT, has been a key part of that conversation, following a high-profile lawsuit from The New York Times, a class-action lawsuit from a group of authors including George R.R. Martin, and a lawsuit from the publishers of newspapers including The New York Daily News and The Chicago Tribune. And a group of publishers including Condé Nast and Verge parent company Vox Media has sued AI company Cohere for infringement.
The Guardian pointed out that Reddit “previously entered licensing agreements with Google, OpenAI and other companies to enable them to train their AI systems”, so Anthropic’s alleged refusal to do things legally is the issue, rather than any knee-jerk resentment of AI training.
AI-Native Universities: Speaking of training, and hot on the heels of my conversation with Professor Garret Merriam at CSU Sacramento in Episode 139, the New York Times reports that OpenAI, the company behind ChatGPT, “has a plan to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life.”
If the company’s strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot’s voice mode to be quizzed aloud ahead of a test.
OpenAI dubs its sales pitch “A.I.-native universities.”
It’s all happening a bit fast for anyone who employs the ‘precautionary principle’:
OpenAI’s push to A.I.-ify college education amounts to a national experiment on millions of students. The use of these chatbots in schools is so new that their potential long-term educational benefits, and possible side effects, are not yet established.
A few early studies have found that outsourcing tasks like research and writing to chatbots can diminish skills like critical thinking. And some critics argue that colleges going all-in on chatbots are glossing over issues like societal risks, A.I. labor exploitation and environmental costs.
Not to mention the lying. And the lying. And the LYING.
Or the threats, like when Anthropic’s “new system revealed it is sometimes willing to pursue ‘extremely harmful actions’ such as attempting to blackmail engineers who say they will remove it.”
CSU, where Episode 139 guest Garret Merriam teaches, “announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for ‘California’s future A.I.-driven economy.’”
Using the phrase “A.I.-empowered university system” does not convince your humble correspondent that AI will really “empower” students whose livelihoods, according to researchers at Oak Ridge National Laboratory, it may have replaced by 2040.
That’s it for this week’s Weird, everyone. As always, I hope you enjoyed it.
Outro music is Where Do We Go From Here by the exceptional and sadly-departed Charles Bradley.
Stay sane, friends.
We gotta all make that change
My brothers and my sisters
It's time to make this world a brighter place
For the generation to come
¹ Yes, “his”. According to the Guardian, “there are no women serving time for [voyeurism]”.