The Weekly Weird #26
Spot the human, eyePhone, EU and Indonesia bring in digital ID, get a second opinion, EU's Lovejoy Law: Chat Control, total Recall revisited
Welcome back to our deep dive into dystopian developments!
If you haven’t already, make sure you check out the latest episode of the podcast with sociologist and author Dr Ashley Frawley. We explore her idea of the ‘constant crisis’ society, and her belief that our culture has a “tendency to communicate social problems as emotional problems.”
Speaking of problems, this week has been a veritable tsunami of weirdness, so let’s crack on!
Spot The Human
If it was a joke, it would probably start: “Aristotle, Mozart, Leonardo da Vinci, Cleopatra, and Genghis Khan are sitting in a train compartment.”
Tore Knabe, “a freelance virtual reality and augmented reality developer in Berlin,” has created a virtual reality (VR) video of what he calls a Reverse Turing Test.
In a Turing Test, a human asks a computer questions to see if it can convincingly pretend to be human. For the reverse, a group of AI non-player characters (NPCs) in a VR environment built using the Unity game engine are tasked with identifying which one of them is a human. The result is a haunting and tense video that casts a convincing shadow on the future of human/computer interactions.
One commenter on X responded to the video simply:
For me, the setting in a mid-century train compartment brought to mind the novel The Passenger by Ulrich Boschwitz. Written in 1938, just after Kristallnacht, the book captures the feeling of pervasive anti-Semitism in Nazi Germany and conjures, especially in the scenes set on trains, the horrific reality of being a “hunted civilian.”
Having that novel on my mind as a backdrop for this video made it creepier than it perhaps is or was intended to be. That said, you can see for yourself how the AIs quickly come up with a fast way of spotting the human, who, to be fair, doesn’t make it hard for them.
It is genuinely chilling when Cleopatra looks at Aristotle and says “Who among us do you believe to be the human interloper?” The four AIs spot the human with 75% accuracy, and their reasoning is sound: The human is the one reacting off-the-cuff, without the extensive processing power, total recall, and access to reams of input material on the subject they are pretending to be.
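Out of curiosity, the panel's winning heuristic can be caricatured in a few lines of Python. To be clear, this is a toy sketch, not Knabe's Unity/LLM implementation: the personas' canned answers and the "shortest answer is the human" rule are invented for illustration.

```python
# Toy sketch of the reverse Turing test heuristic described above.
# NOT Knabe's implementation: the answers and voting rule are invented.

QUESTION = "Describe your greatest achievement in as much detail as you can."

# Information-dense canned responses stand in for LLM output.
AI_ANSWERS = {
    "Aristotle": "My syllogistic logic, set out across the six treatises of "
                 "the Organon, which systematised deductive reasoning...",
    "Mozart": "The opera Don Giovanni, K. 527, premiered in Prague in 1787, "
              "blending opera buffa and opera seria...",
    "Cleopatra": "Restoring Egypt's treasury and fleet while holding Rome "
                 "at bay through alliances with Caesar and Antony...",
    "Genghis Khan": "Uniting the steppe tribes under the Yassa and building "
                    "an empire from the Pacific to the Caspian...",
}

def accuse_human(human_name: str, human_answer: str) -> str:
    """The panel votes for whoever sounds most off-the-cuff -- here,
    crudely, whoever gives the shortest answer."""
    answers = dict(AI_ANSWERS)
    answers[human_name] = human_answer
    return min(answers, key=lambda name: len(answers[name]))

print(accuse_human("Leonardo", "Er, the Mona Lisa, probably?"))  # -> Leonardo
```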
Is it easier to tell a human from an AI than an AI from a human? With the steady drumbeat of digital identity verification and information sanitization initiatives worldwide, is it really far-fetched to imagine a near future in which humans are subjected to machine interrogation to prove their humanity online?
We accepted CAPTCHAs and other methods for weeding out automation and bots. As AI and the bots that use LLMs get more sophisticated, and especially given our all-too-human tendency to take the path of least resistance, there is a troubling possibility that the digital environment will become increasingly inhospitable to humans, because it will be built around making them prove they aren't machines, a task that is both tiresome and demeaning.
Just as a surveillance system can end up treating everyone as a criminal, which is an insult and an injury to our social fabric, what will happen if cyberspace treats everyone as a machine?
eyePhone
Apple have announced that Eye Tracking1, which first showed up in their Apple Vision Pro headset, is being rolled out to their handheld devices.
Powered by artificial intelligence, Eye Tracking gives users a built-in option for navigating iPad and iPhone with just their eyes. Designed for users with physical disabilities, Eye Tracking uses the front-facing camera to set up and calibrate in seconds, and with on-device machine learning, all data used to set up and control this feature is kept securely on device, and isn’t shared with Apple.
Eye Tracking works across iPadOS and iOS apps, and doesn’t require additional hardware or accessories. With Eye Tracking, users can navigate through the elements of an app and use Dwell Control to activate each element, accessing additional functions such as physical buttons, swipes, and other gestures solely with their eyes.
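Apple haven't published how Eye Tracking works internally, but the basic dwell mechanic is easy to picture: if the calibrated gaze point holds within a small radius of an on-screen element for long enough, the element activates. Here's a minimal sketch; the one-second threshold, jitter radius, and sample rate are all my assumptions, not Apple's numbers.

```python
import math

DWELL_SECONDS = 1.0  # assumed: how long gaze must rest to "click"
DWELL_RADIUS = 40.0  # assumed: pixels of jitter tolerated during a dwell

class DwellDetector:
    """Fires an activation when the gaze holds still long enough."""
    def __init__(self):
        self.anchor = None      # (x, y) where the current dwell started
        self.start_time = None  # timestamp of the first sample at the anchor

    def update(self, x: float, y: float, t: float) -> bool:
        """Feed one gaze sample; return True when a dwell activation fires."""
        if self.anchor is None or math.dist(self.anchor, (x, y)) > DWELL_RADIUS:
            self.anchor, self.start_time = (x, y), t  # gaze moved: restart
            return False
        if t - self.start_time >= DWELL_SECONDS:
            self.anchor = None                        # fire once, then reset
            return True
        return False

# Simulate a steady gaze at (500, 300), sampled at 60 Hz.
detector = DwellDetector()
for frame in range(90):
    if detector.update(500.0, 300.0, frame / 60.0):
        print(f"Element under gaze activated at t={frame / 60.0:.2f}s")
```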
While the new tracking function is obviously a good thing for people with physical disabilities, a certain reflexive suspicion kicks in when a major tech company creates a new way to watch you watching whatever you’re watching.
Are we in Orwellian telescreen territory? Or have we been there for a while now?
A 2010 paper called A gaze tracking method as an IPTV interface detailed how a wearable camera could track the wearer’s eyes and predict the trajectory of their gaze. The BBC wrote about the launch of an eye-controlled television in 2012. In 2015, the Daily Mail noted a new Samsung patent on a way to track viewers’ eyes, detect their facial expressions to determine if they are struggling to see action or read subtitles, and automatically zoom in or increase text size accordingly on-screen. A 2017 thread on Reddit discussed the coming inclusion of “under-display” cameras for tracking viewer attention, mostly for the benefit of advertisers who want to know whether anyone actually watches their commercials.
The original post:
I met a guy at a party the other day who works for a big company and tries to optimize advertisements to get the best response. There's a company that has been talking about putting eye-tracking cameras behind the TV screen so you don't know it's there and sending eye-tracking telemetry back through the internet to advertisers. Apparently they work already and they're trying to pay TV manufacturers to include them.
Never let your TV get online, folks!
I'm worried about the near future where small and cheap cell-phone chips will be in every TV for those of us who don't put them on our network.
Orwell came up immediately:
A side-thread developed in which Redditors discussed whether Orwell offered any insight into whether the citizens of Airstrip One actually had to buy the telescreens they were required to have, or whether they were government-issue.
The best response:
I'm pretty sure that Orwell doesn't get into telescreen financing, but I have a feeling that if he did, it would be something like, "there was no law requiring Inner Party members to purchase a telescreen, but all members were aware that not having one was flagged as eccentric behavior that could be used against the individual at any moment. An individual who opted to not purchase a telescreen might find himself explaining that choice in some office deep inside Miniluv."
Here we hit on something that comes up a lot in discussions around dystopian technology. Which is more suspicious to the authorities, someone who has a smartphone like everyone else or someone who refuses to use one? Will resistance to owning or using certain devices become a red flag? Could not owning an Alexa or an Echo become Gen-Z’s equivalent of buying Catcher In The Rye?
TechRadar reported in 2021 on the future of OLED TVs, quoting the CEO of OTI Lumionics, Michael Helander (emphasis mine):
“In mobile phones we're starting to see the first generation of products with under-display cameras, directly integrated into the display and kind of hidden under active pixels. We’re anticipating some future products with under-display Face ID and other types of IR sensors too.”
Will the public's gag reflex ever kick in? Unfortunately, that seems unlikely.
In 2023, The Cato Institute published the results of a survey which asked respondents whether they “favor or oppose the government installing surveillance cameras in every household to reduce domestic violence, abuse, and other illegal activity.”
From a sample size of 2,000 people, the response was somewhat reassuring.
To me, anything less than 100% opposition is worrying, but at least overall opposition came in at 75%. Dividing the responses into age groups, however, revealed a rather troubling trend.
Nearly a third of people under the age of 30 were fully behind mandatory in-home government surveillance, as were one-fifth of people in the 30 - 44 age bracket.
The percentages in the older age brackets hold steady at 5-6%, which suggests that something has been happening to younger people that steadily increases their support for being watched.
The Cato survey was actually aimed at understanding attitudes towards a central bank digital currency (CBDC) in the United States, and, perhaps unsurprisingly, 53% of those who favoured in-home government surveillance cameras also were cool with a trackable, editable, deletable CBDC.
What will civil liberties look like in a country ruled by a cohort that actively favours intrusive surveillance at a significantly higher rate than the current ruling age group?
Now that cameras can be “hidden under active pixels,” and algorithms can identify your face and track what you look at, are we really so far from a world of telescreens?
At least the citizens of Airstrip One were told openly that Big Brother was watching.
EU and Indonesia Bring In Digital ID
Biometric Update reported this week that the EU has officially brought into effect their legal framework for a roll-out of digital ID, “meaning that Member States will be required to offer at least one EUDI Wallet to their citizens and residents by 2026.”
While violently abusing the meaning of the word “allow”, the article explains the mandatory digital ID legislation:
The mobile app wallet will allow users to identify themselves to public and private services and store, share and sign digital documents. The regulation mandates Member States to offer citizens and businesses digital wallets. Public authorities or private entities may issue the wallets and users will be able to link with national digital identities as well as store personal attributes such as driving licenses, diplomas and bank accounts.
Europe’s task is to achieve widespread acceptance of digital identities throughout the EU by harmonizing regulation and coming up with a common technical architecture, reference framework and standards.
In response, Belgium and Italy have both launched digital wallets.
European Commissioner for Internal Market Thierry Breton claimed that “this innovative solution ensures that our personal data remains safeguarded as we navigate the digital world.”
Pushing people to put their personal data online in the hands of private and public sector providers renowned for hacks, leaks, abuses, failures, and poor security protocols is hardly a masterclass in safeguarding.
Not to be outdone, Indonesia has launched a government platform called INA Digital, which will be the basis of a digital ID system.
From Biometric Update:
The INA Digital platform will accommodate public applications for digital ID, as well as health services, education, social aid, police permit for public assembly, and driver’s licenses in a single portal.
Just in case you’re wondering whether this is a good thing that will drive efficiency, or a creepy development in a dystopian direction, this little tidbit of information may help clear that up:
…the Indonesian president requested help from former UK prime minister Tony Blair to further its digital transformation campaign.
Is that the Tony Blair who inspired the documentary Taking Liberties? Or this one2?
Oh, same guy. My bad. I’m sure the people of Indonesia are in good hands.
Get A Second Opinion
Aaron Sibarium at the Washington Free Beacon published an exposé detailing an admissions standards debacle at UCLA Medical School that has turned it into “a failed medical school,” according to one of his sources.
Race-based university admissions have been illegal since 1996 in the state of California, where UCLA is located, but Sibarium's article explains how the new dean of admissions, Jennifer Lucero, has presided over the school's circumvention of the law.
Put plainly:
Race-based admissions have turned UCLA into a "failed medical school," said one former member of the admissions staff. "We want racial diversity so badly, we're willing to cut corners to get it."
[…]
It is almost unheard of for admissions officials to go public, even anonymously, and provide a window into confidential deliberations, much less to accuse their colleagues of breaking the law or lowering standards. They've agreed to come forward anyway, several officials told the Free Beacon, because the results of Lucero's push for diversity have been so alarming.
"I wouldn't normally talk to a reporter," a UCLA faculty member said. "But there's no way to stop this without embarrassing the medical school."
How serious is the situation, if the admissions staff are ratting out their own boss?
Within three years of Lucero's hiring in 2020, UCLA dropped from 6th to 18th place in U.S. News & World Report's rankings for medical research. And in some of the cohorts she admitted, more than 50 percent of students failed standardized tests on emergency medicine, family medicine, internal medicine, and pediatrics.
The result of Lucero’s policy on admissions is that, according to a professor, “a third to a half of the medical school is incredibly unqualified.”
Sibarium’s article is worth reading in its entirety to really absorb what has been going on there. In the meantime, if your doctor graduated from UCLA Med recently, you might want to get a second opinion.
EU’s Lovejoy Law: Chat Control
The EU’s attempt to monitor all private communications “to protect children,” thought to be defunct after member states failed to agree on its scope and specifics, is back from the dead.
Why am I calling it a Lovejoy Law? Because, like Helen Lovejoy of The Simpsons wailing "Won't somebody please think of the children?", its backers lead with child protection to justify sweeping surveillance.
From Matthew Green, a cryptographer who teaches at Johns Hopkins, on X:
Belgium has resurrected it in a “compromise” and many EU member states are positive. There’s a real chance this becomes law.
The stated goal is a crackdown on child sexual abuse material (CSAM), but the means to that end will be “upload moderation.” Your chat app will be actively monitored by the provider, and AI will decide what is acceptable and what isn’t, resulting in automated flagging and blocking of content.
From the EU’s proposal:
The detection should be carried out using “upload moderation”, which is linked to the user’s consent. If a user refuses consent, he can continue to use a service but cannot upload images and videos.
A ‘two-strikes’ system will be used to minimise the impact of false positives, and before human review, “pseudonymization will take place as a further protective measure.”
How reassuring.
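Mechanically, "upload moderation" amounts to matching outgoing media against a database of known material, gated on "consent." The proposal doesn't specify the detection technology, so treat the sketch below as an assumption-laden miniature: real systems would use perceptual hashing and classifiers rather than exact SHA-256 matching, and the strike bookkeeping here is invented.

```python
import hashlib
import uuid

KNOWN_HASHES: set[str] = set()  # stand-in for a database of flagged material
STRIKES: dict[str, int] = {}    # user_id -> matches so far
STRIKE_LIMIT = 2                # the proposal's "two-strikes" idea

def try_upload(user_id: str, consented: bool, media: bytes) -> str:
    if not consented:
        # Refuse consent and you keep the service, minus image/video uploads.
        return "blocked: no consent, no uploads"
    digest = hashlib.sha256(media).hexdigest()
    if digest in KNOWN_HASHES:
        STRIKES[user_id] = STRIKES.get(user_id, 0) + 1
        if STRIKES[user_id] >= STRIKE_LIMIT:
            # "Pseudonymization" before human review: a stable opaque ID.
            pseudonym = uuid.uuid5(uuid.NAMESPACE_OID, user_id).hex
            return f"escalated to human review as {pseudonym}"
        return "flagged: strike recorded, nothing escalated yet"
    return "allowed"

# Example: register one known image, then watch the strikes accumulate.
KNOWN_HASHES.add(hashlib.sha256(b"known-image-bytes").hexdigest())
print(try_upload("alice", True, b"holiday-photo"))      # allowed
print(try_upload("alice", True, b"known-image-bytes"))  # flagged
print(try_upload("alice", True, b"known-image-bytes"))  # escalated
```

Note how the consent gate works: you can refuse, but only by giving up functionality, which is much the same abuse of "allow" flagged in the digital ID section above.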
So why is there disagreement between member states? Are some of them favouring liberty and the presumption of innocence to protect their citizens from surveillance?
Don’t be silly.
Ireland objected because text and audio would be excluded from the automated surveillance dragnet. France supported the proposal in principle but wanted to know how user consent would be binding. Spain wanted to be sure that images embedded in Word or PowerPoint documents would not be missed. Denmark "viewed the limitation of grooming detection to images and video material as inadequate."
More discussion of the proposal is due in June.
Total Recall Revisited
In last week’s Weird, we covered Microsoft’s announcement that it will be embedding an AI within its operating system, one of the functions of which will be Recall, a continuous process of snapshotting your screen to allow context search.
For example, you remember you saw a cool pair of jeans on a website but don’t have the page bookmarked, so you search for “jeans on website” and find the page using Recall.
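Microsoft haven't published Recall's internals (reporting at the time described periodic screenshots OCR'd into a local database), so the following is only a cartoon of the idea: snapshots become searchable text, and your query simply matches against it.

```python
import time

# (timestamp, window_title, ocr_text) triples: a cartoon of Recall's index.
INDEX: list[tuple[float, str, str]] = []

def take_snapshot(window_title: str, ocr_text: str) -> None:
    """Stand-in for the periodic screen-capture-plus-OCR step."""
    INDEX.append((time.time(), window_title, ocr_text))

def recall_search(query: str) -> list[str]:
    """Return window titles of snapshots matching every query term."""
    terms = query.lower().split()
    return [title for _, title, text in INDEX
            if all(t in text.lower() or t in title.lower() for t in terms)]

take_snapshot("Denim Depot - Shop", "slim fit jeans, $79, free shipping")
take_snapshot("Online Banking", "account 1234, balance $2,410.88")
print(recall_search("jeans shop"))  # -> ['Denim Depot - Shop']
```

Notice that the banking snapshot sits in the same plain-text index as the shopping one; nothing in the mechanism distinguishes sensitive content from trivial content, which is exactly the problem.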
In exchange for this minor increase in utility, Microsoft are offering you a total loss of privacy and security.
In Microsoft’s own announcement, there was this little horror-nugget:
Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.
From Kevin Beaumont at Double Pulsar, who says that “Microsoft are inventing a new security nightmare”:
For example, if you log into online banking, your information around account numbers, balances, purchases etc will enter Recall’s database.
This fundamentally changes the relationship with you and your Microsoft Windows computer. It also introduces real risk to you, the customer.
But Microsoft says that all the data will be encrypted and stored locally on your device. What could go wrong?
More from Kevin Beaumont:
If you look at what has happened historically with infostealer malware — malicious software snuck onto PCs — it has pivoted to automatically steal browser passwords stored locally. In other words, if a malicious threat actor gains access to a system, they already steal important databases stored locally.
They can just extend this to steal information recorded by Copilot’s Recall feature.
Microsoft have said that the default allocation of space for Recall on a device with 256GB of storage would be 25GB, around 10% of the drive, "which can store approximately three months of snapshots." If the allocation scales with drive size, a 1TB drive could hold a year's worth of snapshots, giving a potential hacker a treasure trove of compromising material. Even three months of Recall in the wrong hands is enough to screw you over good and proper.
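The scaling claim is simple arithmetic, assuming (and it is only an assumption) that the default allocation keeps the same share of the drive:

```python
# Back-of-the-envelope retention scaling, from the figures quoted above:
# 25 GB on a 256 GB drive holds ~3 months of snapshots. Linear scaling
# with drive size is an assumption, not something Microsoft has stated.
MONTHS_PER_GB = 3 / 25

for drive_gb in (256, 512, 1024):
    allocation_gb = drive_gb * 25 / 256  # same ~10% share of the drive
    months = allocation_gb * MONTHS_PER_GB
    print(f"{drive_gb} GB drive -> {allocation_gb:.0f} GB ~ {months:.0f} months")
# 256 -> 25 GB ~ 3 months; 512 -> 50 GB ~ 6; 1024 -> 100 GB ~ 12
```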
Malwarebytes point out some additional reasons why this is a terrible idea.
The consequences of such a system could be enormous.
With Recall, a CEO’s personal laptop could become an even more enticing target for hackers equipped with infostealers, a journalist’s protected sources could be within closer grasp of an oppressive government that isn’t afraid to target dissidents with malware, and entire identities could be abused and impersonated by a separate device user.
In fact, Recall seems to only work best in a one-device-per-person world. Though Microsoft explained that its Copilot+ PCs will only record Recall snapshots to specific device accounts, plenty of people share devices and accounts. For the domestic abuse survivor who is forced to share an account with their abuser, for the victim of theft who—like many people—used a weak device passcode that can easily be cracked, and for the teenager who questions their identity on the family computer, Recall could be more of a burden than a benefit.
Malwarebytes quotes their General Manager of Consumer Business, Mark Beare:
“With AI there will be a strong pull to put your full self into a model (so it knows you),” Beare said. “I don’t think it’s easy to understand all the negative aspects of what can happen from doing that and how bad actors can benefit.”
Beaumont sums it up:
Microsoft will wordsmith around this and issue lovely comments to press, but this seems like a huge risk which fundamentally undermines the security of the Windows ecosystem. All consumers and businesses should disable and reject Recall on Copilot+ PCs unless it has a substantial rework, as it directly places you and the data you view at risk.
Nice one, Microsoft.
As AI is integrated and enabled by default across devices and operating systems, will personal computers eventually become essentially unusable due to the level of surveillance and risk built into them?
In light of the ending of the eyePhone segment above, where are we headed if the only way to preserve a meaningful degree of privacy is to eschew personal computers, smartphones, tablets, and smart TVs, even though doing so gets you flagged as a suspicious person, because only someone up to no good would want to avoid the State's All-Seeing Eye?
Nothing to hide, nothing to fear…right?
That’s it for this week’s Weird, everyone. Thanks for reading, as always. Drop a comment down below, let me know what you think.
In honour (and in aid) of recent graduates from UCLA Medical School, outro music is Dry Bones by the Delta Rhythm Boys.
“Your neck bone’s connected from your shoulder bone…”
Yes, I know the picture is a Photoshop job, but it is brilliant and, in its essence, truthful.