BOSTON – YouTube announced on Monday that it would remove misleading election-related content that could pose a "serious risk of egregious harm". For the first time, the video platform has laid out in detail how it will handle such political videos and viral falsehoods.
The Google-owned platform, which had previously introduced various policies to deal with false or misleading content, released the full plan on the day of the Iowa caucuses, when voters begin choosing their preferred Democratic presidential candidate.
"In recent years, we have stepped up our efforts to make YouTube a more reliable source of news and information and an open platform for healthy political discourse," said Leslie Miller, vice president of government affairs and public policy for YouTube, said in one Blog entry. She added that YouTube would enforce its policies "regardless of a video's political standpoint".
The move is the latest attempt by technology companies to deal with online disinformation, which is likely to increase before the November election. Last month, Facebook said it would remove videos that had been altered by artificial intelligence in a way intended to mislead viewers, although it also said it would allow political advertisements and not police them for truthfulness. Twitter has banned political advertisements entirely and has said it will largely not silence political leaders' tweets, although it may label them.
YouTube faces a daunting task in dealing with election-related disinformation. More than 500 hours of video are uploaded to the site every minute. The company has also faced criticism over fears that its algorithms push people toward radical and extremist views by showing them more of that type of content.
YouTube said in its blog post on Monday that it would ban videos that give users an incorrect voting date or false information about participating in the census. It would also remove videos that spread lies about a candidate's citizenship status or eligibility for public office. An example of a serious risk, YouTube said, could be a video that has been technically manipulated to make it appear that a government official is dead.
The company added that it would terminate YouTube channels that attempt to impersonate another person or channel, conceal their country of origin, or hide an affiliation with a government. Videos that inflate their views, likes, comments, and other metrics through automated systems will also be removed.
YouTube will likely face questions about whether it applies these guidelines consistently as the election cycle progresses. Like Facebook and Twitter, YouTube faces the challenge that there is often no one-size-fits-all way to determine what counts as a political message and what kind of speech crosses the line into deceiving the public.
Graham Brookie, director of the Atlantic Council's Digital Forensic Research Lab, said that while the policy gives YouTube "more flexibility" in responding to disinformation, the responsibility lies with the company, particularly in defining which voices it treats as authoritative enough to promote and the thresholds for removing manipulated videos like deepfakes.
Ivy Choi, a YouTube spokeswoman, said that a video's context and content would determine whether it was removed or allowed to stay. She added that YouTube would focus on videos that were "technically manipulated or doctored in ways that mislead users, beyond clips taken out of context".
As an example, she cited a video that went viral last year of Speaker Nancy Pelosi, Democrat of California. The video was slowed down to make it appear that Ms. Pelosi was slurring her words. Under YouTube's guidelines, that video would be removed because it was "technically manipulated," Ms. Choi said.
A video that edited former Vice President Joseph R. Biden Jr.'s response to a New Hampshire voter so that it falsely appeared he had made racist remarks could remain on YouTube, Ms. Choi said.
She said deepfakes – videos manipulated with artificial intelligence to make their subjects appear different or to put words in their mouths they never said – would be removed if YouTube determined they were malicious. But whether YouTube removes parody videos would again depend on their content and the context in which they were presented, she said.
Renée DiResta, technical research manager at the Stanford Internet Observatory, who studies disinformation, said YouTube's new policy attempts to "address what it perceives as a newer form of harm."
"The disadvantage here and there, where the lack of context differs from a TV commercial with the same video, is that social channels provide information to people who are most likely to believe them," added DiResta.