The Weekly Weird #22
US Senate simps for surveillance, brand new robots, New Zealand's biometrics 'consultation', China poops on speech
Welcome back to your regular romp through the rustling reeds of unreason!
Before we get stuck in, just a quick reminder that Episode 112 is out this coming Sunday. I do a deep dive on surveillance in the UK with Mark Johnson from Big Brother Watch. He gives a great overview of the ways in which civil liberties are being encroached upon, so I hope you check it out.
Also, FeedSpot have released their updated Top 100 Dystopian Podcasts list, and 1984 Today is at #10!
Well, we’ve got plenty to cover so let’s hit it…
US Senate Simps For Surveillance
The US Senate approved the extension and expansion of FISA (see also WW#21) in an after-hours session on Friday night, over the objections of critics and with the fabricated excuse of time pressure.
CNN reported on the vote while repeating the well-worn lie that a failure to extend the programme by April 19 would cause it to lapse:
President Joe Biden on Saturday signed a bill that reauthorizes a key surveillance authority after the Senate passed the legislation late Friday night, avoiding a lapse in the controversial program.
From Racket News:
Heading into the weekend, it was widely reported that unless the Senate reauthorized section 702 of the Foreign Intelligence Surveillance Act, which among other things allows the government to collect communications of Americans without a warrant, an April 19th deadline would expire.
[…]
This was all fake. The law was already extended. On April 5th of this year, Joe Biden’s Department of Justice effectively granted itself a one-year extension of FISA, meaning the real deadline was April of 2025. Illinois Democrat Dick Durbin and others repeatedly announced the fact, even on the Senate floor, but press didn’t report it.
Here is the relevant excerpt from Senator Dick Durbin’s statement on April 18 about the already-extended-and-therefore-not-at-all-urgent FISA bill:
In fact, the U.S. Department of Justice has already obtained a fresh one-year certification from this Court to continue Section 702 surveillance through April of 2025… there is no need for the Senate to swallow whole a House bill that expands—rather than reforms—Section 702.
You can also take a look at reporting from The New York Times, The Register, and Wired, all of which include mention of the Biden administration’s extension of Section 702 to April 2025 through a court rather than Congress.
Critics of that approach favoured reforms to the bill that would protect American citizens by requiring a warrant for accessing their communications.
From The New York Times:
Elizabeth Goitein of the Brennan Center for Justice at New York University School of Law, who has urged Congress to require officials to obtain warrants before searching for information about Americans in the messages swept up by the program, portrayed the government’s move as cynical. The government, she added, did not have to seek an extension that lasted a full year.
“This shows the government’s utter contempt for the role of Congress and the democratic process when it comes to FISA and Section 702,” she said, referring to the Foreign Intelligence Surveillance Act. She added, “The government isn’t trying to prevent a gap; it’s trying to sneak through an additional year of surveillance without congressional approval.”
Writing in February of this year, The Register quoted Senator Ron Wyden:
“It is utterly ridiculous that the Biden Administration and the Justice Department would rather risk the long-term future of an important surveillance authority than support a single meaningful reform to protect Americans' rights”
It seems that, after securing an extension of the law in court, the Biden administration and complicit media outlets promoted a false narrative of the bill’s imminent expiry to rush it through Congress, legitimising the current version of the bill and extending it even further.
The Associated Press even put the lie in their headline on April 20: Biden signs bill extending a key US surveillance program after divisions nearly forced it to lapse.
More from Senator Durbin, speaking before the bill’s passage:
Congress has a responsibility to the American people to get this right. I recognize the importance of Section 702, but we should not rubberstamp the House’s flawed bill where surveillance is already authorized until April 2025. I want to respect the need for Section 702, but I’m sworn to respect the need to follow this Constitution. Without critical changes to improve this bill, I cannot support it.
Congress has now approved Section 702 with no warrant requirement limiting the ability of the government and intelligence agencies to gather the communications of American citizens. As mentioned in last week’s Weird, the bill also expands the definition of electronic communications to include more material and allow government agencies to force participation in surveillance on a broader cross-section of businesses and civilians.
On April 21, The New York Times published a ‘guest essay’ applauding Congress for this new infringement on privacy, under a truly incredible title.
I’ll take “Orwellian Headlines” for $500, please, Alex.
The authors have the audacity to call the bill “the most expansive privacy reform to the legislation in its history.”
From the essay:
Although Section 702 can be used only to target foreigners abroad, it does include Americans when they interact with foreign targets. Not only is such incidental collection inevitable in today’s globalized world; it can be vital to U.S. security. If a terrorist or spy abroad is communicating with someone here, our government must find out why.
If only there was some sort of foundational legal document outlining the rights of citizens and the limits on government power which could clarify the best way to thread this particular legislative needle…
If only.
The authors of the Times essay, Waxman and Klein, perform admirable logical empretzelment to yada-yada the whole Constitution thing (emphasis mine):
Some of the bill’s critics argued that the F.B.I. should be required to obtain a warrant from a special FISA court before using the information collected under 702 when investigating Americans who may be involved in terrorism, espionage or other national security threats. But requiring such a warrant would have been unnecessary and unwise.
Getting a FISA court order is bureaucratically cumbersome and would slow down investigations — especially fast-moving cybercases, in which queries have proved especially useful. It would cause agents to miss important connections to national security threats. And because this information has already been lawfully collected and stored, its use in investigation doesn’t require a warrant under the Constitution.
Firstly, being cumbersome and slowing down investigations is part of the purpose of the Fourth Amendment. In accessing information about citizens, the government should be encumbered, and the ability of investigators to jump to conclusions by rushing through the process to grab whatever evidence supports their thesis should be slowed down. The argument against the Constitutional approach is made by the TV show 24, in which Kiefer Sutherland brutalises his way through a conveyor belt of civilians to find out what he needs to know while a ticking clock reminds the audience that he has to be torturing these people or else.
Secondly, let’s revisit that last sentence in the quote: “And because this information has already been lawfully collected and stored, its use in investigation doesn’t require a warrant under the Constitution.”
Section 702 was passed in 2008 precisely because the Bush administration had been getting into trouble for unlawful warrantless wiretapping and wanted to regularise its procedure.
From The New York Times:
Enacted in 2008, the law legalized a form of the once-secret Stellarwind program, which the Bush administration created after the Sept. 11, 2001, terrorist attacks.
I’m not a lawyer, but as I understand it, the main function of the Supreme Court is to measure existing laws and judgements against the yardstick of the Constitution. The Fourth Amendment clearly requires a warrant for “searches and seizures,” so while it is legally accurate that the Supreme Court has not yet tested Section 702 and deemed it unlawful, it does not follow that the mere existence of the law obviates the limits placed on government by the Constitution.
The essay’s authors are saying, in effect, that since the government has the data, and getting it is covered by a law that doesn’t require them to comply with the Constitution, they can use that data as they please since the Constitutional question has therefore been resolved.
You have to be very well-educated to be this stupid.
The authors knock it out of the park with a final argument against the straitjacket of obtaining a warrant:
Another problem is that the probable cause needed for a warrant is rarely available early in an investigation. But that’s precisely when these queries are most useful. Database checks allow an agent to quickly see whether there is a previously unnoticed connection to a foreign terrorist, spy or other adversary.
How will we know who the criminals are if we can’t check everyone, in case some of them are criminals?
The Times apologia for renewed government surveillance doesn’t even deal with the expansion of that surveillance.
Senator Ron Wyden objected to the bill during the debate:
There is a central question before the United States Senate, and that is: Who should be forced to help their government spy? The legislation coming from the [House of Representatives] gives the government unchecked authority to order Americans to spy on behalf of their government.
The warrantless “unchecked authority to order Americans to spy on behalf of their government” is now the law of the land (of the free).
Brand New Robots
Boston Dynamics, a robotics company that has wowed the internet for years now with clips of their creations, has retired their flagship Atlas robot in favour of a new model. They said farewell in a genuinely amusing reel of its greatest hits (and misses).
Two new videos showcase their latest innovations.
Spot, their robot dog with an articulating ‘gripper’ for a head, has been modified with ChatGPT to provide tours of their facility. The flapping of the gripper gives it an eerie sock-puppet aspect which almost takes the edge off the footage of an autonomous robot speaking to people in various ‘personalities’, at one point (unprompted) describing previous models of Spot as its parents.
Their video showing a glimpse of their all-new Atlas robot:
The cycloptic camera-head is pure nightmare fuel, right?
Onwards…
New Zealand’s Biometrics ‘Consultation’
As is often the case with governments that want to pass a new law but need their citizens to feel like it was open to debate before it got rammed through, the Privacy Commissioner of New Zealand (PCNZ) is holding a ‘consultation’ so that Kiwis can “have their say” about proposed biometrics regulations.
The above graphic summarises the PCNZ’s “three main points,” which are the headlines they want the public to know and think about in relation to the proposed legislation. They also have a one-page ‘exposure draft’. Take a look and see if you can find the word “consent”. I’ll wait.
You, of course, did not. That’s the kind of pesky detail that would just bog down the simplified accessible version that most people will probably look at. Why complicate matters?
The PCNZ have, however, produced a full 57-page document going into detail on the nature, inclusions, and omissions they’re proposing for the biometrics regulations, which is accessible on their website. The trouble starts on page 8, when they describe how the proposal has changed since the last consultation (emphasis mine):
We’ve changed three things, which we’ll detail below.
Consent is included as a privacy safeguard not a general requirement.
We’ve focussed on three main modifications.
We’re not restricting biometrics for marketing.
No general requirement for companies to obtain consent when taking biometric data from people? Why would they not want general consent?
In a change from our proposals last year, we’ve decided not to add a standalone general consent requirement for two main reasons.
It wasn’t practical. For consent to be meaningful, people need to be able to make an informed choice. It proved difficult to create a reasonable and meaningful consent requirement that worked in broad range of contexts, like when there’s no interaction with the person (biometrics is collected at a distance) or in situations like employment (power imbalance). The exceptions required to make the rule work meant that it began to lose value. The Privacy Act doesn’t always require consent so this made it hard to reference a model that would work well.
We thought a general consent requirement wasn’t the best tool for the job. Consent places a burden on people and with busy lives, there’s a risk it would be overlooked by consumers. Recent scholarship backed this up.
The “recent scholarship” they footnoted is a 2019 paper called The Pathologies of Digital Consent which, in its abstract, seems to contradict the reasoning used by the PCNZ:
…the pathologies of consent reveal how consumers can be nudged and manipulated by powerful companies against their actual interests, and that this process is easier when consumer protection law falls far from the gold standard.
The ‘gold standard,’ the authors rightly point out, is when “agreements [are] between parties who have equal bargaining power, significant resources, and who knowingly and voluntarily agree to assume contractual or other legal obligations.”
The authors point out that this is rarely the case when it comes to consent in the digital realm:
Think about your own agreements with the social networks you use, the apps you install on your phone, or the Amazon Alexa that might sit, listening, in your kitchen or bedroom. Do you know what you agreed to? Have you read the agreements? Did you have a meaningful choice? While the answer to these questions is usually “no,” the dominant legal regime that applies in the United States is that the terms and conditions of these services are valid as long as there is some kind of “notice and choice” to consumers. In practice, and as enforced with occasional exception by the Federal Trade Commission (FTC), notice-and-choice models can be legally sufficient even if the notice is buried somewhere in a dense privacy policy, and the choice is take-it-or-leave-it—accept what a company wants to do with your data or not use the service at all.
The PCNZ’s invocation of the paper seems to be at odds with the authors’ stated position:
Let us be clear about our claim: We are not arguing for a wholesale rejection of consent. A legal system without consent would be so radically different from what we have that it would be almost unimaginable. More fundamentally, we believe that consent should retain its prominent place in our law generally. Our argument is more nuanced. Consent is undeniably powerful, and often very attractive. But we have relied upon it too much, and deployed it in ways and in contexts to do more harm than good, and in ways that have masked the effects of largely unchecked (and sometimes unconscionable) power.
The authors expand on their position by bringing up the ‘privacy paradox’, which is “the idea that consumer anxiety about privacy is undermined by the fact that consumers act in privacy-diminishing ways in practice.”
They set out their objection to that idea (emphasis mine):
Understanding this phenomenon in terms of consent pathologies reveals that consumers are not hypocrites who say one thing but do another that reveals their true preferences. On the contrary, the pathologies of consent show how consumers can be nudged and manipulated by powerful companies against their actual interests, and this phenomenon is easier when the legal regime that purports to protect consumers falls far from the gold standard.
So: a well-reasoned paper explains why a lot of so-called ‘consent’ in the digital services sector is questionable (or, in its authors’ words, ‘pathological’), and a national regulator uses that paper’s reasoning to absolve itself of the responsibility of imposing a consent requirement on biometrics, on the grounds that ‘recent scholarship’ shows the consent obtained wouldn’t really be valid anyway.
Instead, the PCNZ has opted for a ‘privacy safeguard’ approach which puts the onus on companies to obtain “informed consent” from customers only in specific instances, institutionalising within the regulatory framework the very power dynamics the academic paper so eloquently identifies as undermining the spirit and value of ‘gold standard’ consent.
Irony much?
Rather than trying to solve the problem of a less-than-gold-standard consent requirement by giving citizens the kind of information and power they would need to be more equal parties to tech companies in this field, New Zealand seems to have decided that the consent they could force companies to obtain wouldn’t be worth having, so why bother at all?
I didn’t want to misrepresent the position of the PCNZ in this segment, so I referred to their website to see if they had a press team, or someone who could answer some questions. I found this:
Our communications team can help you share information about our biometrics work. Email them at CommsTeam@privacy.org.nz.
I sent them the following questions:
This consultation says it has determined that general consent would be impractical and not the right tool for the job. However, the reason given seems to be that requiring general consent would make it difficult to use biometrics. Is it the position of the Privacy Commissioner that biometrics should/must be used? What are the statistically demonstrated benefits over current methods of identification?
Is the Privacy Commissioner familiar with the Biometric Information Privacy Act in the State of Illinois in the United States? It has provided protections and prevented misuse of personal data for the people of that state since it was passed, and is the basis for a number of lawsuits. Section 15 of the Act sets out the limitations on the gathering of biometrics clearly, with a consent requirement, and the result has not been an appreciable lack of safety, security, or services access in the State of Illinois. Does the Privacy Commissioner feel that citizens of New Zealand are less capable of providing informed consent than the citizens of the State of Illinois, or more at risk of a lack of services if such consent was made mandatory?
Biometrics, unlike passwords or ID numbers, cannot be changed. Once compromised, biometrics remain compromised forever. Why does the Privacy Commissioner believe that a permissive framework for the deployment of biometrics will be helpful and secure for the citizens of New Zealand?
Is there a demonstrated public appetite for the use of biometrics? Or is the interest in creating a framework led by industry rather than public desire?
Who made the decision to walk back a limit on the use of biometrics for marketing, and why?
Can the Privacy Commissioner state categorically that the government of New Zealand will not now or in the future seek to establish a central biometric database of citizens?
The consultation document refers to "high-risk and intrusive uses of biometrics". Given the unchangeable nature of biometric data, there is a strong case to be made that any gathering or use of biometric data is high-risk and intrusive, at least to the individual whose data is being taken and used. What would constitute a "low-risk" or "non-intrusive" use of biometrics?
The consultation states that "informed consent" is a safeguard, but also that general consent will not be required. What is the difference, in practical terms, between "informed consent" and "general consent"? How can an organisation which cannot be expected to obtain the latter be asked to obtain the former?
Their reply was swift, straightforward, and less-than-reassuring:
We won’t be answering any media enquiries on these questions.
The ‘consultation’ is open until 8 May and the email address for responses is biometrics@privacy.org.nz.
Feel free to crib from my questions if you want to respond before the deadline. Just don’t expect any answers…
China Poops On Speech
The Wall Street Journal reported this week on the latest act of censorship out of China:
China ordered Apple to remove some of the world’s most popular chat messaging apps from its app store in the country, the latest example of censorship demands on the iPhone seller in the company’s second-biggest market.
WhatsApp, Threads, Signal, and Telegram were among the apps ordered to be eliminated by the Chinese Communist Party “because of national security concerns.”
I don’t want to sound like I’m throwing shade, but Threads is a threat to your national security? Dude, weak.
Maybe too many people were comparing President Xi to Winnie the Pooh again.
The Cyberspace Administration of China asked Apple to remove WhatsApp and Threads from the App Store because both contain political content that includes problematic mentions of the Chinese president, according to a person familiar with the matter. The Apple spokesperson said that wasn’t part of the reasoning.
In case you missed it, Xi is obsessed with people making fun of him for looking like Winnie the Pooh, to the extent that the lazy honey-loving bear was eliminated from Chinese media with extreme prejudice a few years ago.
It’s true. A picture is worth a thousand words:
Personally, I don’t see it, but I’m all in favour of mockery as a check on granting gravitas to the powerful.
Apple claim to have tried pushing back on the takedown order, only to be told by the CCP “that it had to strictly implement the rules.” Apple caved, as they did the last time.
In 2017, Apple came under fire for removing dozens of apps enabling Chinese internet users to circumvent the Great Firewall from the country’s app store. It also took down thousands of videogame apps in 2020 after Chinese officials cracked down on gaming software without a government license.
The root of the issue is the antipathy the CCP has towards any means of citizens communicating with one another or organising themselves without the government as an intermediary or arbiter.
Chat messaging platforms have been a thorn in the side for Chinese authorities because they provide a channel for users to organize social movements, said Eric Liu, an analyst with China Digital Times, a website tracking Chinese censorship. These apps have shared news about China the government doesn’t like, such as about a protester on a bridge in Beijing in 2022 who was demonstrating against the country’s anti-Covid lockdowns, which first appeared on Telegram.
Apple was “China’s top smartphone maker last year, with more than 17% of the market,” a position perhaps aided in part by their willingness to give in to the CCP on issues of censorship.
It’s nothing new for China to crack down on means of communication, nor for Apple to enable the CCP’s controlling tendencies. A mere three poops for this one. 💩💩💩
That’s it for this week, everyone. Thank you as always for reading.
Outro music is Brother Hao lip-synching to a house remix of Without The Communist Party, There Would Be No New China, the CCP earworm that Uyghur Muslims are forced to sing in re-education camps to get permission to eat (on pain of torture).
Stay sane, friends.