TikTok has previously faced accusations over cyber-bullying, damage to children's brains and Chinese propaganda, but now it faces a new claim: that it has a "Nazi problem".
Neo-Nazis and white supremacists are sharing Hitler-related propaganda and recruiting new members on TikTok, and the platform's algorithm is "promoting" the content, said Wired.
Nazism as a solution to contemporary issues
A report from the Institute for Strategic Dialogue (ISD) uncovered a network of 200 accounts that produced videos promoting Nazism or used Nazi symbols in their profiles.
Researchers from the global thinktank found that accounts are gathering "tens of millions of views" on the platform with messages about Holocaust denial, the glorification of Hitler, and "Nazism as a solution to contemporary issues". There were also posts expressing support for white supremacist mass shooters, including footage of their crimes.
TikTok is "failing to take down violative videos and accounts", the report found, even when the content is reported by users. For example, the ISD reported 50 accounts that violated TikTok's community guidelines on hateful ideologies, promotion of violence, Holocaust denial and other rules. Yet all 50 accounts remained active, with TikTok responding to the reports that there had been "no violation".
In fact, accounts and videos promoting Nazism are "algorithmically amplified", said the researchers, with TikTok quickly recommending such content to users engaging with similar far-right hate speech.
Far-right activists are beavering away to boost their content's reach. Nazi videos end up prominent on TikTok because of "cross-platform coordination with activists" on other platforms, who push the short-form videos to wider audiences, noted Euronews.
While groups promoting neo-Nazi narratives are "typically siloed" on more fringe platforms such as Telegram, the messaging app now acts as a springboard, with videos shared there to be promoted on TikTok, said Wired. White supremacist groups share videos, images and audio tracks, "explicitly telling other members to cross-post the content on TikTok".
Meanwhile, a broad network of accounts appeared to be "actively helping each other" by liking, sharing and commenting on each other's content in order to increase their viewership and reach, said the tech outlet.
Broader disinformation problem
"In no way is this particularly surprising," Abbie Richards, a disinformation researcher specialising in TikTok, told Wired, as she has noted similar things "time and time again" in her own research.
The ISD's findings show that "a small number of violent extremists" can "wreak havoc on large platforms due to adversarial asymmetry", said Adam Hadley, executive director of Tech Against Terrorism. This "underscores the need for cross-platform threat intelligence supported by improved AI-powered content moderation", he added.
But Marcus Bösch, a Hamburg University researcher who monitors TikTok, told the outlet that it might not be that simple. The platform says it has around 40,000 content moderators, so "it should be easy to understand such obvious policy violations". But given the "sheer volume" of content and "the ability by bad actors to quickly adapt", the entire disinformation problem "cannot be finally solved, neither with AI nor with more moderators".
A TikTok spokesperson said that "hateful behaviour, organisations and their ideologies have no place on TikTok, and we remove more than 98% of this content before it is reported to us".