The Weekly Weird #31
TikTok and Europe's far-right bloc, from Russia with love (of fake news), AI likes 'em young, Nixon was right (but early), football humour
Here we are again, at the smouldering tail of a fiery week, staggering besooted and smoke-lunged into a safe room where we can shut the door, shake off the ashes, and laugh about what’s going on out there.
And wowzer, is there stuff going on out there.
We’ll wave aside the British general election happening today, in which Labour is expected to win a historic landslide victory and strike a generational wipe-out blow to the musty husk of the Conservative Party.
We’ll shrug off the surprise French election this Sunday in which an alliance called the New Popular Front has cut inter-party deals to try to prevent a Le Pen government.
We’ll ignore the ongoing kerfuffle in the United States, where President Biden is continuing to resist calls for him to stand aside in favour of a more compos mentis candidate.
Our fun is to be had elsewhere, at the margins of madness, if you will.
TikTok and Europe’s Far-Right Bloc
VSquare have published an investigation into the role that TikTok has played in Europe’s swing to the right, as shown by the European Parliament election results and, more recently, the success of France’s National Rally (Rassemblement National) in the first round of voting.
Seven journalists from across Central and Eastern Europe teamed up to report on the latest flood of crazy hitting the kids through their touch screens.
The lede:
TikTok feeds in Central Eastern European countries, including the Czech Republic, Slovakia, Estonia, Romania and Poland, have been filled with conspiracy theories and fear-mongering resembling Russian disinformation — and to the benefit of the far-right.
Is there an epidemic of social media-enabled mindpaste getting piped into the susceptible brains of Europe’s youth? Or is this the hand-wringing response one has come to expect after a strong showing by populist and nationalist parties, in which the possibility that the voting public simply preferred what those parties had to offer, and chose them for it, is never even considered? I genuinely don’t know, and the truth might be a combination of both.
An example of how difficult it is to know which direction this is coming from appears in the following paragraph on the situation in Germany:
Although TikTok claims to have safety protocols in place, analysts have criticized the platform for allowing fake accounts to manipulate user engagement and content virality. In Germany, the right-wing party AfD seems to be taking advantage of this. AfD has a strong pro-Russian stance, and its politicians have been endorsed by Russian President Vladimir Putin. The party’s content performed exceptionally well on TikTok, especially in the months leading up to the European Election. Analysis shows that AfD’s content received significantly more views and interaction than that of other competing parties. A key demographic for AfD on TikTok is teenagers — who are now eligible to vote in Germany’s European elections.
Writing about the Czech Republic, using the example of a particular influencer posting in Czech from Moscow, the reporters strike a similarly ambiguous tone:
TikTok’s algorithm is a powerful tool that swiftly gauges user preferences and promotes potentially viral content. In doing so, throughout the Czech Republic and Europe, it has also inadvertently allowed political activism, ideological extremism, hate speech, and propaganda to thrive on the platform. Understanding this phenomenon is a complex task, as TikTok’s data access for researchers is limited, necessitating a combination of API data, web scraping, and manual data collection for investigations.
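As an aside, here’s roughly what that “combination of API data, web scraping, and manual data collection” looks like in practice. A minimal sketch in Python, assuming approved access to TikTok’s gated Research API; the endpoint, field names, and query schema here are best-effort assumptions, not a verified recipe:

```python
import requests

# Placeholder endpoint and token: TikTok's Research API requires an approved
# academic application, and the exact schema may differ from this sketch.
API_URL = "https://open.tiktokapis.com/v2/research/video/query/"
TOKEN = "YOUR_RESEARCH_API_TOKEN"

def fetch_videos_by_hashtag(hashtag: str, max_count: int = 100) -> list[dict]:
    """Pull public video metadata for one hashtag (sketch only)."""
    payload = {
        "query": {
            "and": [{"operation": "EQ",
                     "field_name": "hashtag_name",
                     "field_values": [hashtag]}]
        },
        "max_count": max_count,
    }
    resp = requests.post(
        API_URL,
        params={"fields": "id,create_time,region_code,view_count"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", {}).get("videos", [])

# In practice, researchers then merge rows like these with hand-collected
# data (screenshots, manually logged videos) that the API never returns.
```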
Disentangling political activism from propaganda is tricky at the best of times, but one is left with the impression that such criticism might not be so forthcoming against mainstream parties using the same tools to spread views the authors don’t consider problematic.
The Romanian segment suffers similarly. It begins with a telling introduction:
Claudiu Târziu, one of the newest Romanian members of the European Parliament, expressed concern about the European Commission’s alleged push towards a “federal superstate” in a TikTok video.
Then it elaborates:
Claudiu Târziu, one of the leaders of the AUR party, is associated with a network of accounts that spreads disinformation, conspiracies, and Russian propaganda on TikTok, inciting hatred. Context.ro analyzed the content posted on TikTok by seven MEPs from AUR and SOS and found that 20 to 42% of their videos contained toxic messages.
Is expressing concern over the EU’s movement towards a “federal superstate” one of those “toxic messages”?
Is the push towards a federal Europe really just an allegation?
This is from Wikipedia:
R. Daniel Kelemen of Rutgers University said: "Unencumbered by the prejudice that the EU is sui generis and incomparable, federalism scholars now regularly treat the EU as a case in their comparative studies (Friedman-Goldstein, 2001; Filippov, Ordeshook, Shevtsova, 2004; Roden, 2005; Bednar, 2006). For the purposes of the present analysis, the EU has the necessary minimal attributes of a federal system and crucially the EU is riven with many of the same tensions that afflict federal systems."
Back to VSquare:
…over 41% of Claudiu Târziu’s TikTok content was flagged by the software as disinformation. His videos included exaggerations, conspiracy theories, fear-mongering, xenophobic or homophobic messages, manipulation, and false information about EU policies.
I don’t know this guy and I haven’t seen what he posts on social media. He might have awful opinions or hold unsavoury views. But based on the way the article frames what one could consider uncontroversial actions, like saying that the EU’s trajectory points towards a fully federal system, or a registered political party in Germany using social media to put out campaign information, it seems like the ‘Russian disinformation’ label could be coming from a more knee-jerk or cynical place.
Is it suspect that TikTok is run by a Chinese company under the ultimate control of the Chinese Communist Party? Sure. Does Russia engage in wide-ranging psychological and disinformation operations, often just to mess with other countries even if they don’t have a specific tactical aim in mind? Of course. Are the internet and social media awash in so much bullshit that James Cameron in a pressurised submersible probably couldn’t even find the bottom? Absolutely.
But does that make every criticism of dominant cultural and political narratives inherently problematic, just because they circulate online? No.
From my reading of the article, a telling omission was any definitive language as to the truth or falsehood of the information being shared. There are statistics (“63% of clips on the feed could be deemed disinformation”) and implied guilt (“social media is reflecting its more traditional counterpart: media sources affiliated with the Orbán government have for years wilfully amplified Russian propaganda narratives”), but everything is lumped together in a way that treats it as a given that being called ‘far right’ by the media makes whatever you say disinformation and presupposes an alliance with Russian influence.
Even if TikTok is enabling a psy-op against Europe’s youth, the article’s framing falls in line with the increasing political pulpit-pounding over misinformation and disinformation, which almost always holds that a lack of censorship and gate-keeping online is letting ‘the people’ have their fragile minds polluted by the wrong information, information deemed wrong not always because it is incorrect or inaccurate, but because of who gets to decide.
There may be something in the air across the newsrooms of the world’s media this week, as our next story is also a detailed hit piece calling out mis-, dis-, and malinformation.
From Russia With Love (Of Fake News)
A BBC investigation (published on the same day as the VSquare story above) “can reveal” that “A network of Russia-based websites masquerading as local American newspapers is pumping out fake stories as part of an AI-powered operation that is increasingly targeting the US election.”
The BBC give examples of news stories that were manufactured, shared online, and spread rapidly despite being demonstrably false, and identify the nexus of these particular fake news stories as a network of websites that appear to be American news outlets but are in fact run by “a former Florida police officer who relocated to Moscow.”
The sixteenth and seventeenth paragraphs of the story read as follows:
Experts say that the operation is just one part of a much larger ongoing effort, led from Moscow, to spread disinformation during the US election campaign.
While no hard evidence has emerged that these particular fake news websites are run by the Russian state, researchers say the scale and sophistication of the operation is broadly similar to previous Kremlin-backed efforts to spread disinformation in the West.
Which researchers? Which experts? “Broadly similar”?
The BBC then explains that “disinformation merchants have had to get more creative” because of “state-backed disinformation campaigns and money-making ‘fake news’ operations [which] attracted attention during the 2016 US election campaign.”
This is where we get to the good bit:
The operation investigated by BBC Verify uses artificial intelligence to generate thousands of news articles, posted to dozens of sites with names meant to sound quintessentially American – Houston Post, Chicago Crier, Boston Times, DC Weekly and others. Some use the names of real newspapers that went out of business years or decades ago.
Most of the stories on these sites are not outright fakes. Instead, they are based on real news stories from other sites apparently rewritten by artificial intelligence software.
How could the BBC know whether AI was used to rewrite existing news stories?
In some instances, instructions to the AI engines were visible on the finished stories, such as: “Please rewrite this article taking a conservative stance”.
This is the AI-user equivalent of George H. W. Bush’s infamous 1992 campaign moment, when he read his cue card’s stage direction aloud: “Message: I care.”
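That slip is also what makes these operations machine-detectable: if a site publishes the model’s instructions alongside its output, a plain text scan will flag it. Here’s a minimal sketch; the list of telltale phrases is my own illustration, since BBC Verify haven’t published their actual method:

```python
import re

# Telltale fragments of leaked LLM prompts and refusals. Illustrative only;
# not BBC Verify's actual methodology.
LEAK_PATTERNS = [
    r"please rewrite this article",
    r"taking a (conservative|liberal) stance",
    r"as an ai language model",
    r"i cannot fulfill this request",
]

def find_prompt_leaks(article_text: str) -> list[str]:
    """Return any leaked-prompt fragments found in a published article."""
    hits = []
    for pattern in LEAK_PATTERNS:
        match = re.search(pattern, article_text, flags=re.IGNORECASE)
        if match:
            hits.append(match.group(0))
    return hits

story = ("Washington insiders were stunned on Tuesday... "
         "Please rewrite this article taking a conservative stance.")
print(find_prompt_leaks(story))
# ['Please rewrite this article', 'taking a conservative stance']
```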
How totally did these “disinformation merchants” fail at covering their tracks? Here’s a screenshot courtesy of the BBC:
The fake news outlets that used AI to create fake versions of news stories even scraped profile pictures to add to the bylines of their fake journalists.
For instance, a photo of best-selling writer Judy Batalion was used on multiple stories on a website called DC Weekly, “written” by an online persona called “Jessica Devlin”.
For the record, Judy Batalion is a Tier 1 badass name, up there with George Carlin’s creations Biff Webster and Chuck Steak.
The fakening deepens: The bullshit factory run by a Florida man in Moscow even creates fake videos to cite as sources for fake stories on fake websites.
To further bolster the credibility of the fake stories, operatives create YouTube videos, often featuring people who claim to be “whistleblowers” or “independent journalists”.
In some cases the videos are narrated by actors – in others it appears they are AI-generated voices.
Several of the videos appear to be shot against a similar-looking background, further suggesting a co-ordinated effort to spread fake news stories.
The videos aren’t themselves meant to go viral, and have very few views on YouTube. Instead, the videos are quoted as “sources” and cited in text stories on the fake newspaper websites.
By the way, note the sleight-of-hand in which people or staff or workers become “operatives”, a much more military-sounding designation that just reeks of government handling.
Deep in the bowels of the story, “one of the people” is identified as John Mark Dougan, the “former Florida police officer” mentioned at the beginning of the story. After several paragraphs of background on him, his personal history, and his activities online and in Russia, a fascinating twist:
“I think it's important not to overplay his role in this campaign," said Darren Linvill, co-director of Clemson University’s Media Forensic Hub, which has been tracking the network. “He may be just a bit of a bit player and a useful dupe, because he's an American.”
Despite his appearances on state-run media and at government-linked think tanks, Mr Dougan denies he is being paid by the Kremlin.
“I have never been paid a single dime by the Russian government,” he said via text message.
Obviously his denial could just be a cute literal loophole, since he’d most likely be paid in roubles, not dollars and cents. Or maybe he doesn’t work for the Russian government and isn’t even that big a part of the disinformation landscape, in which case placing him at the centre of the story is kind of odd.
Quotes throughout the story from Chris Krebs, Clint Watts, and Nina Jankowicz, all (in)famous faces from the Twitter Files, Censorship-Industrial Complex exposés by Racket News, and the ill-fated Disinformation Governance Board, don’t bode well for the diagnosis of all this as a disinterested and objective piece of reporting.
One of the final sentences in the article is equally telling.
YouTube removed an AI-narrated video posted by an obscure channel that was used as the source of the false story after it was flagged by BBC Verify.
Cool. So BBC Verify are now in the business of having content removed from the internet, while also using as sources the very people who were at the centre of a scandal in the US involving government working hand-in-glove with private entities to circumvent First Amendment limits on state censorship.
Don’t get me wrong: I dislike the idea of an automated AI system repurposing media content to churn out partisan propaganda at scale. Here at the Weird we’ve looked at the issue of AI pollution invading every corner of the internet, and it is gross. But even with my gag reflex activated by fake pictures of fake journalists faking fake news stories to create fake outrage, I’m no less queasy at the prospect of another round of ginning up public disapproval to push for some sort of mass censorship campaign, all to protect the internet from an unfortunate side effect of its own freedom: people with ideas of highly variable merit trying to get other people to believe them, in order to affect the choices they make.
Speaking of gross stuff involving AI…
AI Likes ‘Em Young
From Biometric Update:
Personal photos of Australian children are being used to train AI through a dataset that has been built by scraping images from the internet – exposing kids to the risk of private information leaks and their images being used in pornographic deepfakes.
Eew.
The images of the children were collected without the knowledge or consent of their families and used to build the Laion-5B dataset, according to findings from human rights organization Human Rights Watch (HRW). The photos were then used by popular generative AI services such as Stability AI and Midjourney, The Guardian reports.
HRW claims that AI tools trained on the dataset were later used to create synthetic images that could be categorized as child pornography.
How big of a deal is this?
From The Guardian:
An analysis of less than 0.0001% of the 5.85bn images contained in the Laion-5B dataset, used by services such as Stable Diffusion creator Stability AI and Midjourney, found 190 photos of Australian children scraped from the internet.
That percentage suggests that there are a tremendous number of child photos involved here. Double eew.
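For a back-of-the-envelope sense of scale (my arithmetic, not HRW’s): 0.0001% of 5.85 billion is only 5,850 images, so 190 hits in a sample at most that size is a hit rate of roughly 3%. HRW’s sampling was targeted rather than random, so this is a thought experiment, not an estimate, but the naive maths looks like this:

```python
# Naive extrapolation from HRW's sample of the Laion-5B dataset.
DATASET_SIZE = 5_850_000_000   # images in Laion-5B
SAMPLE_FRACTION = 0.000001     # "less than 0.0001%" as a fraction
HITS = 190                     # photos of Australian children found

sample_size = DATASET_SIZE * SAMPLE_FRACTION   # 5,850 images examined
hit_rate = HITS / sample_size                  # ~3.2% of the sample
naive_total = hit_rate * DATASET_SIZE          # ~190 million if the rate held

print(f"sample: {sample_size:,.0f} images, hit rate: {hit_rate:.2%}, "
      f"naive extrapolation: {naive_total:,.0f} photos")
```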
Laion, a German nonprofit, responded with the sensitivity typical of technologists: “We would like to reiterate …the fact that the most effective way to increase safety is to remove private children[’s] info from [the] public internet.”
Right, just don’t put pictures of your kids online and they won’t end up as grist for a generative AI mill cranking out spank fodder for pederasts. Easy. Why would you upload child photos if you didn’t want them to end up in a grotty video on the dark web?
Or is it a wee bit more complicated than that?
The Guardian again:
Hye Jung Han, HRW’s children’s rights and technology researcher, told Guardian Australia the photos were being lifted from photo and video sharing sites, as well as school websites.
“These are not easily findable on school websites,” she said. “They might have been taking images of a school event or like a dance performance or swim meet and wanted a way to share these images with parents and kids.
“It’s not quite a password-protected part of their website, but it’s a part of the website that is not publicly accessible, unless you were sent the link.
“These were not webpages that were indexed by Google.”
HRW also found an unlisted YouTube video of schoolies celebrations in the dataset. Such videos are not searchable on YouTube and scraping YouTube is against its policies, Han said.
Here’s a BBC segment on the issue from 2023 (the relevant bit is from 2:28):
The media critic in me raised an eyebrow at this from the presenter: “We didn’t pay to subscribe and we didn’t view any illegal images. We could see the comments and hashtags.” But let’s set aside my cynicism for now and presume that everything is on the level and this stuff is out there in the way they say it is. If people do this stuff with AI, are they less likely to hurt real children?
There’s a fascinating argument to be made here about relative harms, keeping predators off the streets, that sort of thing, and it helps to be open-minded and not prudish when dealing with the unfortunate realities of humanity’s darker urges. That said, the ick factor is off the scale.
Octavia Sheepshanks, who features in the BBC segment, recently wrote an article for The Critic about exactly this, called Artificially Generated Child Sexual Abuse Material is not a victimless crime.
Maybe we can have her or someone else steeped in this topic on the podcast for a discussion about it.
On a brighter note…
Nixon Was Right (But Early)
Who can forget the notorious moment in the Frost/Nixon interview when Richard Nixon, in all seriousness, says the immortal line: “When the president does it, that means that it is not illegal.”
“By definition?” asks David Frost with impeccable poise.
What a journalistic coup it was, getting the former President to say something so arrogant, so obviously wrong, so detrimental to the cause of liberty that America claims to represent. It was plain to anyone watching that Nixon had a bizarre and unrealistic concept of the presidency, rooted in a power-hungry vision of total impunity, which was as laughable as it was worrying.
Fast-forward 47 years:
Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity from criminal prosecution for actions within his conclusive and preclusive constitutional authority. And he is entitled to at least presumptive immunity from prosecution for all his official acts. There is no immunity for unofficial acts.
The above is from the US Supreme Court decision in Trump v. United States. “Enjoy” the full text of the decision here.
In “the first criminal prosecution in our Nation’s history of a former President for actions taken during his Presidency,” the Court’s decision makes it clear that Nixon was right, but early.
America needed 47 years of decay, hubris, institutionalised self-interest, and complacency to make presidential impunity the law.
For all those rending their garments and gnashing their teeth about Trump, a warning: presidents change more frequently than legal precedents. The chances of this decision being overturned in the future are super-slim, and presidential behaviour ostentatious enough to motivate such a reversal is, for a modern democratic nation, unthinkable.
Secret wars and covert ops? Waved through. Warrantless wiretapping? No problemo. “Kill lists” and the killing of US citizens abroad without a trial? Done. Redefining “due process” so that it is distinct from “judicial process”? All good.
It took Trump to make it official: The President of the United States of America is above the law.
Whether it is Ol’ Mushroom Tip or some other haircut-rockin’ smile-flashin’ flesh-pressin’ so-and-so, it’s in the rule book. No fouls. No red cards. The cheat code has been entered and God Mode is activated.
At least with respect to the President’s exercise of his core constitutional powers, this immunity must be absolute. As for his remaining official actions, he is entitled to at least presumptive immunity.
[…]
When the President exercises [his] authority [to act], Congress cannot act on, and courts cannot examine, the President’s actions. It follows that an Act of Congress—either a specific one targeted at the President or a generally applicable one—may not criminalize the President’s actions within his exclusive constitutional power. Neither may the courts adjudicate a criminal prosecution that examines such Presidential actions. The Court thus concludes that the President is absolutely immune from criminal prosecution for conduct within his exclusive sphere of constitutional authority.
Trump was reviled for saying: “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.”
Now we know he wouldn’t even get arrested, and neither would any other president.
What could go wrong?
At least a president’s “unofficial” acts aren’t granted absolute or presumptive immunity. That’s a small mercy.
Oh, wait…
From the dissent by Justice Sotomayor, joined by Justices Kagan and Jackson (emphasis mine):
In fact, the majority’s dividing line between “official” and “unofficial” conduct narrows the conduct considered “unofficial” almost to a nullity. It says that whenever the President acts in a way that is “‘not manifestly or palpably beyond [his] authority,’” he is taking official action.
[…]
It then goes a step further: “In dividing official from unofficial conduct, courts may not inquire into the President’s motives.” It is one thing to say that motive is irrelevant to questions regarding the scope of civil liability, but it is quite another to make it irrelevant to questions regarding criminal liability. Under that rule, any use of official power for any purpose, even the most corrupt purpose indicated by objective evidence of the most corrupt motives and intent, remains official and immune. Under the majority’s test, if it can be called a test, the category of Presidential action that can be deemed “unofficial” is destined to be vanishingly small.
A new era in American politics has begun. Make some popcorn.
Football Humour
Here’s a fun little clip of an England fan grilling AI Steve (created by the ‘Real’ Steve Endacott, our guest in Episode 116) about the team’s prospects in Euro 2024.
That’s it for this week, everyone.
Stay sane, friends.
Outro music is the late great Charles Bradley with The World (Is Going Up In Flames).
This world is going up in flames
and nobody wanna take the blame