California has passed a set of laws aimed at curbing the spread of misinformation and deceptive digital audio or visual content pertaining to the upcoming election. The introduction of advanced generative artificial intelligence has made that threat more pervasive, but California's new regulations will hold social media platforms accountable for what they allow to spread.
Targeting political deepfakes
California will now impose stricter restrictions requiring social media companies to moderate the spread of election misinformation created with artificial intelligence, known as deepfakes, after Gov. Gavin Newsom (D) signed a batch of new laws targeting the technology. Of the five laws he signed, three are directly related to the election and deepfakes. Only one of the laws will go into effect before the 2024 presidential election, but the "trio could offer a road map for regulators across the country who are attempting to slow the spread of the manipulative content powered by artificial intelligence," said The New York Times.
The signed measures include A.B. 2839, which extends the period during which people or groups are prohibited from knowingly posting deceptive AI-generated or manipulated content about the election. The restriction, which takes effect immediately, previously applied only during the 120 days before the election but now extends to 60 days after it. Newsom also signed A.B. 2655, which will require social media companies to remove or label deceptive or digitally altered AI-generated content within 72 hours of a complaint, and A.B. 2355, which requires election advertisements to disclose whether they use AI-generated or manipulated content.
California's laws are the latest in the efforts by "dozens of states to limit the spread of the AI fakes around elections and sexual content," said the Times. While some laws similarly target election-related content, "most are focused on deepfake pornography." There are no federal regulations for deepfakes, though such regulations have been proposed several times. California's new laws are "very different from other bills that have been put forth," Ilana Beller, an organizing manager for the democracy team at Public Citizen, said to the Times. "This is the only bill of its kind on a state level."
Newsom draws backlash
The upcoming election has already been tinged with the looming threat of misinformation and misleading deepfake content. Republican candidate and former President Donald Trump shared deepfakes of Taylor Swift and her fans, the Swifties, implying they supported him. One image riffed on old military recruitment posters, with Swift pointing and a caption saying, "Taylor wants you to vote for Donald Trump." The incident led to Swift publicly endorsing his opponent, Vice President Kamala Harris. Trump also shared AI-generated images of himself surrounded by Black people that "purported to demonstrate his support among Black voters," said Fortune. Another image he shared on X appeared to show Harris speaking at the DNC surrounded by communist flags. The latter image would "likely be the sort of thing that would be covered by California's new laws," the outlet said.
After signing the bills, Gov. Newsom drew the ire of tech mogul Elon Musk. In July, the governor vowed to sign legislation to crack down on political deepfakes after Musk posted an altered campaign video of Harris. After the bills were signed, Musk targeted Newsom in a scathing post on his X platform, implying that he "signed a LAW to make parody illegal." He then said California needed "new leadership" and urged his followers to make the Harris deepfake video "viral." The man who made the viral video with AI-generated audio clips of Harris calling herself the "ultimate diversity hire" sued to block two of the new laws, arguing that they threatened the right to freedom of speech.