The Weekly Weird #69
Air fryer spier, hijab watch, foods of colour, AI-mazon, 23AndHawley, Coming Soon, Britain's database state, Minnesota not-so-nice, au revoir to encryption, the end of history, privacy or monopoly?
Holy moly and howdy ho! Welcome back to another edition of the Weekly Weird, where you’ll find more dystopian doings than you can shake an RFID-tagged smart stick at.
How’s it all been going out there in CrazyLand? Answers on a postcard or in the comments, we want to know.
For this lady in China, not so great. She was forced by staff at an airport in Shanghai to remove her makeup because it was preventing their facial recognition system from scanning her.
Anyway, smoke ‘em if you’ve got ‘em, let’s do this!
AI-mazon
AI is making leaps and bounds, according to Amazon CEO Andy Jassy. In a letter to Amazon employees this week, he laid out how the Everything Company is “using Generative AI to make customers’ lives better and easier” and “using Generative AI broadly across our internal operations.”
“We [h]ave strong conviction that AI agents will change how we all work and live”, he wrote, before explaining the extent of the capability and future adoption of AI agents.
Think of agents as software systems that use AI to perform tasks on behalf of users or other systems. Agents let you tell them what you want (often in natural language), and do things like scour the web (and various data sources) and summarize results, engage in deep research, write code, find anomalies, highlight interesting insights, translate language and code into other variants, and automate a lot of tasks that consume our time. There will be billions of these agents, across every company and in every imaginable field. There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast.
“Today, we have over 1,000 Generative AI services and applications in progress or built, but at our scale, that’s a small fraction of what we will ultimately build,” he continued, before slipping in the warning shot.
As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.
If the corporate double-talk makes that paragraph tricky to parse: yes, Jassy is saying that AI will take jobs from humans over time, and that Amazon will have fewer net employees in the future as a result.
Adding insult to injury, Jassy then tells the Amazon workforce to “be curious about AI, educate yourself, attend workshops and take trainings, use and experiment with AI whenever you can”.
You know that thing I just said will probably take your job? You should help speed up the process of it making you redundant. And no, Jan from Accounting did not eat your yoghurt. You know who doesn’t write their name on their sack lunch? AI.
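If “agents” still sounds abstract, the core mechanic Jassy describes is a surprisingly simple loop: a model picks an action, the software runs it, and the result is fed back in until the task is done. Here’s a minimal, purely illustrative sketch in Python — `call_llm` is a hypothetical canned stub standing in for a real model API, not anything Amazon has described:

```python
# A toy agent loop: the "LLM" chooses a tool, the loop executes it and feeds
# the observation back, until the model says it's done. Entirely illustrative;
# call_llm is a canned stub, not a real model API.

def search_web(query: str) -> str:
    """Placeholder tool: a real agent would call a search API here."""
    return f"(pretend search results for: {query!r})"

TOOLS = {"search_web": search_web}

def call_llm(transcript: str) -> dict:
    """Stub model: searches once, then declares the task finished."""
    if "Observation:" in transcript:
        return {"done": "Summary written from the observation above."}
    return {"tool": "search_web", "arg": transcript.splitlines()[0]}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        decision = call_llm(transcript)
        if "done" in decision:
            return decision["done"]
        result = TOOLS[decision["tool"]](decision["arg"])
        transcript += f"\nObservation: {result}"
    return "Stopped after max_steps without finishing."

print(run_agent("Find me a cheap flight to Lisbon"))
```

The “billions of agents” Jassy predicts are, at bottom, variations on that loop with better models and real tools attached.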
For an extra frisson of weirdness, here’s the story as told in Naked News’s Bikini Report, which we all now know exists:
Who will Amazon’s and Microsoft’s AI agents sell stuff to if everyone’s unemployed?
Foods Of Colour
In a decision to dye for, General Mills, the elderly military persona turned cereal merchant, has announced that it “plans to remove certified colors from all its U.S. cereals and all K-12 school foods by summer 2026.”
MAHA be praised, say the avoiders of seed oils. Madness, cry the critics of RFK Jr.’s health regime.
Meanwhile, a milder response might be: If they can just take it out, why was that crap in the food to begin with?
The General wants you to know that “85 percent of General Mills’ full U.S. retail portfolio is currently made without certified colors.”
In the healthy future, all the Lucky Charms will be brown. Trix is now only for colour-blind kids.
Say thank you, America!
Air Fryer Spier
The Guardian reported this week that the Information Commissioner’s Office (ICO) in the UK has delivered its first guidance to “[m]akers of air fryers, smart speakers, fertility trackers and smart TVs”.
People have reported feeling powerless to control how data is gathered, used and shared in their own homes and on their bodies.
After reports of air fryers designed to listen in to their surroundings and public concerns that digitised devices collect an excessive amount of personal information, the data protection regulator has issued its first guidance on how people’s personal information should be handled.
Did you know that there are “smart fertility trackers that record the dates of their users’ periods and body temperature, send it back to the manufacturer’s servers and make an inference about fertile days based on this information”?
The ICO is dropping the hammer all over the internet of things:
Smart speakers that listen in not only to their owner but also to other members of their family and visitors to their home should be designed so users can configure product settings to minimise the personal information they collect.
The ICO’s press release on the new guidance cites a Which? investigation from November 2024 that found many egregious examples of intrusion, including the ‘air fryer spier’ details:
In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason. The Xiaomi app linked to its air fryer connected to trackers from Facebook, Pangle (the ad network of TikTok for Business), and Chinese tech giant Tencent (depending on the location of the user). The Aigostar air fryer wanted to know gender and date of birth when setting up an owner account, again for no clear reason, but this was optional. The Aigostar and Xiaomi fryers both sent people’s personal data to servers in China, although this was flagged in the privacy notice.
Gender and date of birth? “Sorry, User, you must be at least sixteen to make sweet potato fries.”
Other findings by Which? included that “[s]mart TV menus are littered with ads and thirsty for user data” and “Samsung’s TV app requested eight risky phone permissions, including being able to see all the other apps on the phone, second only to the Huawei smartwatch.”
And of course, “[a]ll of the devices on test wanted to know users’ precise locations.”
Would a smart fridge in Iran or Saudi Arabia report you if you’re chilling a bottle of wine?
Speaking of Iran…
Hijab Watch
In March, the Guardian reported that Iran is intensifying surveillance of women to enforce its hijab law. If you’re unfamiliar, the hijab is one of the types of head coverings worn by Muslim women, and in Iran it is imposed on them by law.
One woman in the story “was sent an SMS message containing her car registration plate that stated the exact time and place that she had been recorded driving without her head properly covered.” She was told that a repeat offence would result in the car being “impounded”. Best not to think about what Iran’s Guidance Patrol (i.e. morality police) would do to her.
The UN called Iran’s ramped-up hijab enforcement “systematic repression”, reporting “increased use of technology and surveillance, including through State-sponsored vigilantism”.
The Iranian government also have an app called Nazer, “a state-backed reporting platform that allows citizens and police to report women for alleged violations.”
The Guardian reported:
The app is accessible only via Iran’s state-controlled National Information Network. Members of the public can apply to become “hijab monitors” to get the app and begin filing reports, which are then passed to the police.
According to the UN mission, the app has recently been expanded to allow users to upload the time, location and licence plate of a car in which a woman has been seen without a hijab.
It can also now be used to report women for hijab violations on public transport, in taxis and even in ambulances.
According to the UN report, aerial surveillance using drones has also been used at events such as the Tehran international book fair and on the island of Kish, a tourist destination, to identify women not complying with the hijab law.
The government has also increased online monitoring, blocking women’s Instagram accounts for non-compliance with hijab laws and issuing warnings via text message. CCTV surveillance and facial-recognition technology have also been installed at universities.
In July 2024, “Arezoo Badri, a 31-year-old mother of two, was shot and paralysed when a police officer opened fire on her vehicle in Noor city, Mazandaran province, after her car was reportedly flagged for a hijab violation.”
FaRT Settlement
Biometric Update reported this week on an all-too-rare win for privacy:
Jefferson Parish Louisiana Sheriff Joe Lopinto’s office has agreed to pay $200,000 to settle a federal civil rights lawsuit brought by a Georgia man who was wrongly jailed for nearly a week after being misidentified by facial recognition technology.
“Randal ‘Quran’ Reid was arrested outside Atlanta, Georgia on the day after Thanksgiving in 2022...pulled over by police, informed that he was wanted for crimes in Louisiana, and taken into custody on a warrant.”
Despite having never been to the state of Louisiana, Reid “was held in jail for six days before the warrant was rescinded.”
After engaging in a fight to prove his innocence (and law enforcement negligence), it was discovered that “the sole basis for identifying Reid as a suspect was a facial recognition match obtained by Jefferson Parish Sheriff’s Office (JPSO) Detective Andrew Bartholomew using Clearview AI”, the by-now infamous tech company investigated by New York Times technology reporter Kashmir Hill in her excellent book Your Face Belongs To Us.
Listen to my conversation with Kashmir Hill in Episode 117:
Bartholomew’s arrest affidavit “did not disclose that the identification had been generated through facial recognition...[but] simply cited ‘a credible source,’ omitting any mention of the tool or its limitations.”
Once in the system, Reid found himself in procedural quicksand.
Bartholomew’s identification of Reid was then borrowed by a Baton Rouge detective who used it to secure yet another warrant against Reid for a similar theft in which thieves allegedly stole over $10,000 worth of Chanel and Louis Vuitton merchandise.
The settlement of $200,000 could mark a turning of the tide for the undisclosed use of facial recognition in American law enforcement.
“I’m definitely satisfied with the outcome,” Reid said. “I finally feel like I got some type of justice,” adding, “I’m not a person who likes or seeks the attention. Knowing I had to go through this for people coming after me is why I started it.”
23AndHawley
Josh Hawley, senior senator for Missouri and inspiration for the question “Why the long face?”, slapped around the CEO of the failed genetic data-hoovering operation 23AndMe at a Senate Judiciary Committee hearing on the sale of the company’s assets, which of course include DNA information on millions of people who never expected to see it auctioned off.
You can watch CEO Joseph Selsavage enter the Octagon here:
During the exchange, Selsavage agreed with Hawley that “genetic data is sensitive information”, but then they got into the fine print.
Hawley: So you're going to take 15 million Americans’ genetic information and you're going to sell it to somebody. And your message to us is today, trust us, it'll be fine. Maybe it's a big pharma company. Maybe we'll get lucky. Maybe they'll treat it right. I thought your privacy code, your privacy commitment said that consumers had a right not to have their information shared with anybody else without their consent.
Selsavage: Senator, that consent is, you know, essentially for, you know, not really for research purposes. And we are not selling it for research purposes.
Hawley: So when you tell the consumer, give us your personal information and we'll take money from you and we won't give it to anybody without your consent, it's not real. It just means, you know, maybe, kind of, depends on the day.
Selsavage: Senator, you know, I will say that our customers’ data is their own. They have the right at all times to access that information. They can edit it.
Hawley: We’re sure they can. But you're about to sell it to who knows who. They can't control it. You said to Senator Moody that consumers have complete control of their data. Complete. How can they have complete control if you're about to sell it without their consent?
Selsavage: They can delete that data any time up until the sale.
Hawley: Oh, okay, they can delete the data. Have you fixed the ability of customers to go on your website and delete it? Because right after you announced your sale, your deletion page went down. I hold in my hand here an article from the Wall Street Journal: “23AndMe site goes down as customers struggle to delete their data”. Can they even get on to your site to delete their data?
Someone call a doctor.
After Selsavage reassured him that customers can in fact delete their data from the Settings page on the company’s website, Hawley pulled out 23AndMe’s privacy policy.
Hawley: How do they know their data has been deleted?
Selsavage: Because we send them a notification that their information has been deleted…Our policy states that, you know, we will delete their data within 30 days. And in most cases, it is automatic and happens much more quickly.
Hawley: And when you’ve deleted it, it's deleted. It's gone forever.
Selsavage: All the genetic data is deleted forever.
Hawley: Really? Because that's not what your privacy statement says in the fine print. Let's read it. What your statement says is "we retain personal information for as long as necessary to provide the services and fulfill the transactions you have requested, comply with our legal obligations, resolve disputes, enforce agreements", etc., etc. And then it goes on: “23AndMe and/or our contracted genotyping laboratory will retain your genetic information even if you choose to delete your account.”
Selsavage: Senator, 23AndMe does not retain any genetic information regarding the consumer once they delete their account.
Hawley: It says right here that you will retain genetic information, including date of birth and sex, even if you choose to delete your account. This is your privacy policy. I'm just quoting from it.
Selsavage: Senator, you know, to the best of my knowledge, we do not maintain any genetic information.
Hawley: It says even if you choose to delete your account, we will retain your genetic information, date of birth and sex, even if you choose to delete your account.
Hawley then delivered remarks that sum up the entire queasy nexus of data, privacy, and private-sector databases and tracking, plus the long-unresolved problem of consumers accepting agreements they haven’t read in order to transact with companies that probably haven’t read them either.
Here's my point. It's a pattern. Your consumers actually aren't in control of anything. You are. You control their data. You control their genetic information. Now you're about to sell it. You promised them “we won't ever sell it without your consent”. But you're doing it. You promised them “we’ll allow you to delete it”, but you don't. In fact, you've lied to them, haven’t you?
Selsavage offered up some epic weak tea.
“I assure you that we are deleting all of our customers who have…” he began, only for Hawley to cut him off (emphasis mine):
No, you're not. You're not because your policies say they're not and you're not deleting it, because if you were, your company wouldn't be worth $300 million.
[…]
And I tell you what, it's amazing to me. You're not getting your socks sued off by your customers. I hope they will. I hope they will rush to the courthouse even as we are here today, to sue you into oblivion, for lying to them and taking their most personal identifiable information and selling it for a profit and lying to them and to the American public.
[…]
What you're doing here has all kinds of implications, national security implications, all of it. But nothing is worse than taking the personal identifiable information of American consumers and keeping it and lying to them about it while you make a huge profit off of it. It's unbelievable to me. It's absolutely unbelievable. This concludes our hearing.
Hear hear.
Can we all stop pretending that these companies are selling their customers a service, please? It’s high time it was front and centre in these discussions that the customers are the product for a lot of tech companies, and those companies are selling them (in the form of their personal data) to anyone who puts in an attractive-enough bid.
Coming Soon
You know what’s sexy? Compliance.
Specifically, complying with Britain’s Online Safety Act, a highly problematic ‘Lovejoy Law’ that is in the process of pushing this sceptr’d isle towards digital ID, age verification, and all the trappings of a Benthamite surveillance/database state.
VerifyMy, one of the many age verification companies keen to have the law enforced good and hard for the benefit of their bottom line, just released a reading of the Act, presented by the porn star Ivy Maddox.
In their words:
To spur the industry into action, Ivy has created her most ‘explicit’ film yet – Coming Soon – a performance of the Online Safety Act to raise awareness of the regulation and the importance of protecting children from pornographic content online.
I never say no to a bit of regulatory action.
Hot.
Biometric Update reports that “mandatory implementation of ‘highly effective age assurance technology’ for users accessing adult content on porn sites, search engines and user-to-user platforms such as social media or dating platforms” will be backed by fines that “can reach up to £18 million (US$24.3M) or 10 percent of global revenue, and Ofcom can fine or ban any platform found to be in violation.”
The threat is real: this month, Pornhub, Youporn and RedTube shut their doors to French users over the country’s age assurance legislation, and at least 17 U.S. states currently have laws that have caused Aylo, Pornhub’s parent company, to withdraw service.
Britain's Database State
Speaking of British laws that are making the UK a worse place, the Data (Use and Access) Bill received Royal Assent on Thursday and thereby passed into law.
Here’s the ‘Long title’ of the new law:
A bill to make provision about access to customer data and business data; to make provision about services consisting of the use of information to ascertain and verify facts about individuals; to make provision about the recording and sharing, and keeping of registers, of information relating to apparatus in streets; to make provision about the keeping and maintenance of registers of births and deaths; to make provision for the regulation of the processing of information relating to identified or identifiable living individuals; to make provision about privacy and electronic communications; to establish the Information Commission; to make provision about information standards for health and social care; to make provision about the grant of smart meter communication licences; to make provision about the disclosure of information to improve public service delivery; to make provision about the retention of information by providers of internet services in connection with investigations into child deaths; to make provision about providing information for purposes related to the carrying out of independent research into online safety matters; to make provision about the retention of biometric data; to make provision about services for the provision of electronic signatures, electronic seals and other trust services; to make provision about the creation and solicitation of purported intimate images and for connected purposes.
The full text of the bill can be read here.
It makes changes to the UK’s version of the EU’s GDPR data protection regulations, provides for ‘digital verification’ infrastructure, updates everything from the collecting and processing of data to the register of births and deaths, and, in certain places like the section about “purpose limitation”, appears to broaden the ways in which personal information can be gathered and processed.
Time will show how the law will be used, but in wording and intent, the outlook is ungood. The database state looms.
Minnesota Not-So-Nice
Last week, four people were shot, two fatally, allegedly by Vance Luther Boelter, “a security contractor and religious missionary who has worked in Africa and the Middle East” according to the BBC.
State representative Melissa Hortman and her husband, Mark, were shot and killed in their home…[and] State Senator John Hoffman and his wife, Yvette, were also shot multiple times and injured, but survived.
The horrific attacks have an additional dystopian tinge, however. Wired have reported that the alleged assassin “may have gotten their addresses or other personal details from online data broker services, according to court documents.”
According to an FBI affidavit, police searched the SUV believed to be the suspect's and found notebooks that included handwritten lists of the names of “more than 45 Minnesota state and federal public officials, including Representative Hortman’s, whose home address was written next to her name.” According to the same affidavit, one notebook also listed 11 mainstream search platforms for finding people's home addresses and other personal information, like phone numbers and relatives.
The addresses for both lawmakers targeted on Saturday were readily available. Representative Hortman's campaign website listed her home address, while Senator Hoffman's appeared on his legislative webpage…
Senator Ron Wyden, often vocal on issues of privacy and surveillance, weighed in:
“The accused Minneapolis assassin allegedly used data brokers as a key part of his plot to track down and murder Democratic lawmakers,” Ron Wyden, the US senator from Oregon, tells WIRED. “Congress doesn't need any more proof that people are being killed based on data for sale to anyone with a credit card. Every single American's safety is at risk until Congress cracks down on this sleazy industry.”
The End Of History
Bill Wasik, writing in the New York Times last week, called out AI as being “poised to rewrite history”.
Among the many issues with AI and LLMs is the fact that, as one author who tried using models for historical research told Wasik, “it has no bullshit detector.”
In May of this year, The Times published bracing numbers about how, inexplicably, for all the strides in capability that L.L.M.s were otherwise making, their hallucination problem was getting worse: For example, on a benchmark test, OpenAI’s new o3 “reasoning” model delivered inaccuracies 33 percent of the time, more than twice the rate of its predecessor. To Johnson and his team at Google, the persistence of this problem validates the approach of NotebookLM: While the tool does occasionally misrepresent what’s in its sources (and passes along errors from those sources without much ability to fact-check them), constraining the research material does seem to cut down on the types of whole-cloth fabrications that still emerge from the major chatbots.
Historians using AI for non-fiction research and possibly even for drafting the next generation of history books raises all kinds of questions. Wasik intelligently asks whether the shuffling of information into new orders and new juxtapositions is really that different from advances of the past.
The printed index in books, a device dating back at least to the year 1467, allowed scholars to find relevant material without reading each tome in full. From the perspective of human knowledge, was that a step toward utopia or dystopia? Even now, 558 years later, who’s to say? Innovations that cultivate serendipity — such as the Dewey Decimal System, by whose graces a trip into the stacks for one book often leads to a different, more salient discovery — must, almost by definition, be plagued by arbitrariness. Classify a book about the Mariposa Battalion with Brands’s “The Age of Gold” and other gold-rush titles (979.404), and it will acquire a very different set of neighbors than if it’s classified as a book about the Battalion’s victims (“Native populations, multiple tribes,” 973.0497).
Wasik quotes historian Lara Putnam on the significance of how information is found:
“For the first time, historians can find without knowing where to look,” she wrote, in a particularly trenchant paragraph. “Technology has exploded the scope and speed of discovery. But our ability to read accurately the sources we find, and evaluate their significance, cannot magically accelerate apace. The more far-flung the locales linked through our discoveries, the less consistent our contextual knowledge. The place-specific learning that historical research in a predigital world required is no longer baked into the process. We make rookie mistakes.”
Another factor is the economist William Baumol’s cost disease, namely that “when technology makes certain workers more efficient, it winds up making other forms of labor more expensive and therefore harder to justify.”
As anyone who has watched newsroom funding drift away from investigative journalism can attest, a history discipline that can do things fast and cheap with AI is logically less likely to spend money on slower, on-site research that may never bear fruit.
Why spend a month camped out in some dusty repository, not knowing for sure that anything publishable will even turn up, when instead you can follow real, powerful intellectual trails through the seeming infinitude of sources accessible from the comfort of home?
One prospect, floated by an author Wasik spoke to, is the idea of a book shipping with its sources and an AI agent to interpret them, allowing readers to reconstruct the narrative and approach the underlying information from angles not taken in the original book.
It is perhaps the most brain-breaking vision of A.I. history, in which an intelligent agent helps you write a book about the past and then stays attached to that book into the indefinite future, forever helping your audience to interpret it. From the perspective of human knowledge, is that utopia or dystopia? Who’s to say?
Since it became possible to record and mix music economically at home, musicians have experimented with releasing tracks or even albums as ‘stems’, individual audio files that listeners can then remix for themselves. So in a certain way, the idea is not that new or shocking. That said, generative AI is now challenging the making of music itself, so whether the music industry will weather the storm or not remains to be seen.
Going back to history, Google recently released Veo 3, its generative video AI model, which has already helped create a funny video with a great central conceit: what if historical figures were modern social media influencers?
#UnplannedButBlessed
Au Revoir To Encryption
“Client-side scanning” is the angle from which the EU has sought to undermine end-to-end encryption in messaging and file sharing. In non-technical terms, the EU’s proposals gravitate towards having your device inspect the material on it before that material is encrypted and sent via messaging apps like Signal or WhatsApp, privacy be damned.
A new joint letter from 90 organisations has called on the EU to stop targeting encryption in its regulatory efforts, saying they are “deeply concerned by the Commission’s continued focus on identifying ways to weaken or circumvent encryption”.
Past and ongoing efforts in the European Union to grant law enforcement access to encrypted data have primarily focused on client-side scanning, a technology that circumvents encryption by scanning user devices before the encryption mechanism starts. Scanning not only violates the promises of end-to-end encryption but also creates vulnerabilities that could be exploited by criminals and hostile state actors. There is widespread consensus among technical experts that encryption circumvention tools create new risks that threaten national security, concerns recently echoed by member state authorities in Sweden and the Netherlands. The European Court of Human Rights and European Union Agency for Fundamental Rights have emphasized that statutory requirements that “weaken the encryption mechanism for all users” would be disproportionate under the Charter of the Fundamental Rights of the EU.
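To see why the signatories call scanning a circumvention of encryption rather than a break of it, here’s a toy sketch (hypothetical names throughout): the inspection happens on-device, against the plaintext, before any encryption occurs, so the strength of the cipher becomes irrelevant to whoever controls the watchlist.

```python
# Toy illustration of client-side scanning: the message is checked against a
# watchlist of hashes *before* encryption, so end-to-end encryption still
# "works" while the content has already been inspected. Hypothetical sketch;
# real proposals are more elaborate (e.g. perceptual hashes for images).

import hashlib

WATCHLIST = {hashlib.sha256(b"some forbidden file").hexdigest()}  # made-up entry

def report_to_authority(digest: str) -> None:
    print(f"[scan] match reported before encryption: {digest[:16]}...")

def send_message(plaintext: bytes, encrypt) -> bytes:
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in WATCHLIST:
        report_to_authority(digest)   # fires on-device, pre-encryption
    return encrypt(plaintext)         # the cipher no longer protects the content

# Demo with a placeholder "cipher" (a real app would use its E2EE stack):
ciphertext = send_message(b"some forbidden file", encrypt=lambda m: m[::-1])
```

Hence the “vulnerabilities that could be exploited” warning: anyone who can add entries to the watchlist, or intercept the reports, inherits a pre-encryption view of your messages.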
Privacy Or Monopoly?
“Side-loading” is when a user installs an app on their device directly, without going through an app store like Apple’s App Store or Google Play.
The EU’s Digital Markets Act contains provisions requiring that side-loading be permitted, since it circumvents the monopoly that companies like Apple and Google hold over the cost and availability of apps. Apple famously extracts a 30% tax on proceeds from app sales in its App Store, a clear indicator of monopoly power (i.e. the capacity to tax).
The Electronic Frontier Foundation (EFF) has more:
All this means that every euro a European Patreon user sends to a performer or artist takes a round-trip through Cupertino, California, and comes back 30 cents lighter. Same goes for other money sent to major newspapers, big games, or large service providers. Meanwhile, the actual cost of processing a payment in the EU is less than one percent, meaning that Apple is taking in a 3,000 percent margin on its EU payments.
To make things worse, Apple uses “digital rights management” to lock iPhones and iPads to its official App Store. That means that Europeans can’t escape Apple’s 30 percent “app tax” by installing apps from a store with fairer payment policies.
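The EFF’s “3,000 percent” figure is easy to sanity-check. A back-of-envelope sketch, using only the numbers from the quote above (a 30% cut against a sub-1% processing cost):

```python
# Back-of-envelope check on the EFF's margin claim (illustrative numbers only).
payment = 1.00                    # one euro sent by a European Patreon user
apple_cut = 0.30 * payment        # Apple's 30% "app tax"
processing_cost = 0.01 * payment  # EFF: actual EU processing cost is under 1%
margin = apple_cut / processing_cost
print(f"€{apple_cut:.2f} taken on ~€{processing_cost:.2f} of cost: "
      f"{margin:.0f}x, i.e. ~{margin * 100:,.0f}%")
```

Thirty cents of revenue against roughly a cent of cost: a 30x multiple, which is where the 3,000 percent comes from.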
Now Australia is getting in on the act, considering a proposal that would force Apple to permit users to install apps from other sources. In response, Apple has predictably claimed this is a terrible idea, since it would derail their 30% closed-ecosystem gravy train.
The company's core argument is that the changes mandated by the EU's DMA, which came into full effect in March 2024, introduce serious security and privacy risks for users. Apple claims that allowing sideloading and alternative app stores effectively opens the door for malware, fraud, scams, and other harmful content.
Apple also claimed that its compliance with the DMA “has led to users being able to install pornography apps and apps that facilitate copyright infringement, things its curated App Store aims to prevent”.
Regardless of how one feels about any of the specifics, there’s a broader question here that seems to go unasked: When did preventing things by design become socially desirable?
If someone is infringing copyright, or violating other laws, that is what law enforcement is for. The clue is in the name. Why are tech companies in the law enforcement business? Should they be?
Is Apple preserving a tech monopoly or protecting its users? Maybe if they were prevented from charging a tax on developers for using their platform, we’d find out.
And finally…
Wired just published The WIRED Guide to Protecting Yourself From Government Surveillance. It’s worth taking a look, especially if you’re in the United States.
That’s it for this week’s Weird, everyone. I hope you enjoyed it.
Outro music is Baraye by Shervin Hajipour, a protest song against the Iranian regime. Shervin was arrested after releasing it.
For the sunrise after a long dark night…
For freedom.
Stay sane, friends.
Wow, "what a tangled web we weave..." that continues to weave and weave...to what end indeed...