Commentary on Political Economy

Friday 27 August 2021

‘YouTube magic dust’: How America’s second-largest social platform ducks controversies
While Facebook and Twitter take the brunt of backlashes over misinformation and ‘censorship,’ the Google-owned video giant has often lain low. That may finally be changing.

A YouTube logo during LeWeb Paris 2012 in Saint-Denis near Paris. (Eric Piermont/AFP/Getty Images)
By Will Oremus
August 26 at 6:35 am Taiwan Time
The morning after Afghanistan’s capital fell to the Taliban last week, Facebook said it would continue to ban the group, while Twitter said it would “remain vigilant” as it prioritized safety.
Hours later that day — after the other social media companies made headlines — YouTube said it would also continue to ban Taliban accounts.
It’s not the first time YouTube has lagged behind its rivals. Earlier this year, it was the last to suspend President Donald Trump following the Jan. 6 Capitol riot, and it has said the least about potential reinstatement. Three years ago, its parent company Google declined to send an executive to a Senate grilling on foreign election interference, leaving Facebook’s and Twitter’s leaders to face the music. And in 2017, it was the last of the major social media outlets to disclose evidence of Russian interference on its platform.
“Overall, it seems like Google’s strategy has been to keep their heads down and let Twitter and Facebook take the heat, and so far the media and political classes have rewarded that strategy,” said Alex Stamos, director of the Stanford Internet Observatory and Facebook’s former chief security officer.
In an era when tech giants control the largest global information networks, their decisions about who can speak and what they can say have massive geopolitical implications. It’s a responsibility that each of the major U.S.-based public social media platforms has come to understand and take seriously, to varying degrees. But for a constellation of reasons, YouTube’s content policies have tended to attract less media attention and scrutiny than those of Facebook or Twitter, experts say — even though nearly a quarter of U.S. adults say they get news from YouTube, according to the Pew Research Center. (Facebook serves as a news source for 36 percent of Americans, the highest share of any social platform, while Twitter is third at 15 percent.)
With more than 2 billion users worldwide, YouTube is the Internet’s dominant hub for user-created videos of all kinds — including those sharing political rants, coronavirus vaccine conspiracy theories and, in rare cases, live streams of terrorist threats or mass shootings. Researchers have identified the network as playing a major role in misinformation campaigns, including the effort to discredit the results of the 2020 U.S. presidential election, and its recommendation algorithms have been implicated in leading some users down a path of radicalization. Yet it has repeatedly ducked the brunt of backlashes that Facebook and Twitter have absorbed head-on.
To what extent that’s the result of a cunning public-relations strategy, a blinkered press corps or genuine differences in the companies’ products and policies depends on whom you ask. But all seem to be at work to some degree.
“I call it ‘YouTube magic dust,’ ” said Evelyn Douek, a lecturer at Harvard Law School who researches the regulation of online speech. “It’s just the only explanation for how they keep managing to stay off in the shadows while the other platforms take the heat.”
A former employee who worked on content policy issues at YouTube, who spoke on the condition of anonymity because of a nondisparagement agreement, offered a more quotidian theory.
“I think that part of it is a deliberate strategy,” the former employee said. “And another part of it is just the fact that statements and blog posts and big decisions can take forever to get through Google. Other companies are more nimble.”
YouTube, for its part, suggested that the pattern was less clear-cut than it might seem. It didn’t deny that there have been times when YouTube was less communicative, less transparent or slower than some rivals to respond to content controversies. But it offered defenses for each example critics raised, and pointed to other instances in which YouTube was quicker, more consistent or more decisive than Facebook and Twitter.
“Through the years, we’ve established and announced many policies to remove content violating our Community Guidelines,” YouTube spokesperson Farshad Shadloo said in a statement. “At times, this work has allowed us to be the first to quickly enforce our policies, including removing covid-19 anti-vaccination content or presidential election misinformation.” Shadloo also said the platform has been working to engage more with the public, citing as evidence a 2020 interview that YouTube chief executive Susan Wojcicki granted to the New York Times and a series of 2019 corporate blog posts on “The Four R’s of Responsibility.”
YouTube published a blog post on Wednesday, following requests for comment by The Washington Post, outlining its philosophy on misinformation. In it, chief product officer Neal Mohan argued that what YouTube’s algorithms amplify is more important than what its moderators leave up or take down.
“The most important thing we can do is increase the good and decrease the bad,” Mohan wrote. “That’s why at YouTube we’re ratcheting up information from trusted sources and reducing the spread of videos with harmful misinformation.”
That follows Wojcicki publishing an op-ed in the Wall Street Journal on content policy and Mohan giving an interview to the tech blog the Verge, both earlier this month. Those moves suggest YouTube may be opening up a bit at last.
Still, researchers say that YouTube remains one of the hardest social media platforms to study, because video is so difficult to analyze in bulk and the platform doesn’t provide many tools to do so.
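That difficulty is partly a matter of access. The platform’s public Data API returns metadata about videos, such as titles, descriptions and view counts, but not the footage or audio itself, and caption downloads generally require the uploader’s authorization. The sketch below is a minimal illustration in Python of what that metadata-only access looks like; the API key is a placeholder, and real projects are subject to daily request quotas.

```python
# Minimal sketch: what YouTube's public Data API (v3) gives a researcher.
# The videos.list endpoint returns metadata such as title, description and
# view count, but not the video content itself, which is what content
# analysis at scale would actually require. API_KEY is a placeholder.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # hypothetical key from the Google Cloud console
ENDPOINT = "https://www.googleapis.com/youtube/v3/videos"

def fetch_video_metadata(video_id):
    """Fetch title, description and statistics for a single video ID."""
    params = urllib.parse.urlencode({
        "part": "snippet,statistics",
        "id": video_id,
        "key": API_KEY,
    })
    with urllib.request.urlopen(ENDPOINT + "?" + params) as resp:
        data = json.load(resp)
    item = data["items"][0]
    return {
        "title": item["snippet"]["title"],
        "description": item["snippet"]["description"],
        "views": item["statistics"].get("viewCount"),
    }

if __name__ == "__main__":
    # Metadata is easy to pull; what the video actually says is not.
    print(fetch_video_metadata("dQw4w9WgXcQ"))
```

Compare that with a text platform, where the posts themselves are the data: here a researcher gets numbers about a video but still has to watch or transcribe it to learn what it says.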
No YouTube executive has ever testified before Congress, something Twitter and Facebook’s leaders have done multiple times. YouTube will tell you it’s because its executives have not been explicitly invited; Google CEO Sundar Pichai and Google’s then-general counsel Kent Walker have each testified, while in 2018 the company was represented by an empty chair after declining to send Larry Page, the chief of Google’s parent company, Alphabet.
YouTube was the only one of the three major platforms that declined to create new policies in anticipation of efforts to discredit or overturn the 2020 U.S. presidential election. Mohan told the New York Times that its usual processes would be sufficient. As a result, YouTube went unmentioned in some stories about the role social media would play in the election. Mohan’s claim was quickly disproved, as YouTube became a major source of lies and conspiracy theories about the vote, The Post reported. For example, a clip from One America News falsely claimed that Democrats had stolen a “decisive victory” from Trump by “tossing Republican ballots.”
Those falsehoods continued to flourish in the election’s wake, according to the New York Times, because YouTube’s rules prohibited videos that misled people about the voting process but not those that spread false stories about the outcome.
YouTube was also the last of the three major platforms to suspend Trump after the Jan. 6 Capitol riot. Facebook indefinitely suspended him on Jan. 7, sparking a firestorm. On Jan. 8, Twitter issued a permanent ban, setting off its own major news cycle. YouTube issued its indefinite suspension the next week, as the furor over the first two was dying down. The platform said it was simply sticking to its long-standing “three strikes” policy for suspending creators who violate its rules, whereas Facebook and Twitter were making their Trump calls ad hoc.
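For context on what that policy entails: YouTube’s published Community Guidelines rules, as of 2021, give a channel strikes that expire after 90 days, with escalating penalties of a one-week posting freeze, a two-week freeze and termination on the third active strike. The Python sketch below is an illustrative model of those public rules, not YouTube’s actual enforcement code.

```python
# Illustrative model of YouTube's published "three strikes" rules (circa
# 2021): strikes expire after 90 days; the first active strike freezes
# posting for one week, the second for two weeks, and a third terminates
# the channel. A simplified sketch, not YouTube's real enforcement system.
from datetime import date, timedelta

STRIKE_LIFETIME = timedelta(days=90)

class Channel:
    def __init__(self):
        self.strikes = []        # dates on which strikes were issued
        self.terminated = False

    def active_strikes(self, today):
        """Count strikes issued within the last 90 days."""
        return sum(1 for issued in self.strikes
                   if today - issued < STRIKE_LIFETIME)

    def issue_strike(self, today):
        """Record a strike and return the resulting penalty."""
        self.strikes.append(today)
        count = self.active_strikes(today)
        if count >= 3:
            self.terminated = True
            return "channel terminated"
        if count == 2:
            return "posting frozen for 2 weeks"
        return "posting frozen for 1 week"
```

Under rules like these, a suspension can be framed as routine enforcement rather than a one-off judgment, which is the contrast YouTube drew with Facebook and Twitter.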
In the time since, however, YouTube has stood out for the opacity of its process for reinstating Trump. Facebook has deputized a high-profile, quasi-legal Oversight Board to publicly review and critique its Trump ruling. Twitter has stuck with its permanent ban. YouTube has been largely silent, save for Wojcicki saying in March that YouTube would reinstate Trump’s channel once it deems the “elevated risk of violence” to have subsided. YouTube has declined to specify just how it will assess that, though Shadloo said it’s monitoring a range of factors, such as violent rhetoric across platforms, government security alerts and elevated law enforcement presence.
YouTube’s reticence on sensitive political issues dates to at least the aftermath of the 2016 U.S. presidential election.
In September 2017, Facebook became the first of the platforms to discover and disclose evidence of Russian interference on its platform. While Facebook was being pilloried as a destroyer of democracy, Google told reporters it had found no evidence of Russian ads on its services. A month later, as Google faced pressure from Congress, The Post reported that it had in fact found Russian-bought ads.
Douek said that while she can only speculate as to YouTube’s motives, she suspects that its penchant for avoiding headlines is at least partly intentional.
“An advantage of that approach is that Facebook and Twitter lead the cycle,” whereas YouTube “has the benefit of seeing the public reaction to the story and seeing how some of the discourse works” before making its own call.
But Douek added that some of that is due to a preexisting bias on the part of both the media and Congress, perhaps in part because journalists and public officials themselves use Facebook and Twitter more than YouTube.
“YouTube is playing into a dynamic that already exists,” she added.
The former employee who worked on YouTube policy said that the tactic of waiting for another company to go first, at least in some cases, was purposeful.
“What you see is one company will take the lead, and then everyone drafts off of that statement to avoid getting their own news cycle,” the person said. “The reason this persists is because it is a very effective PR strategy.”
YouTube’s Shadloo disputed the notion that it strategically avoids controversies. He said the platform is “always open to feedback” and continues to “explore ways to expand transparency and engage with the wider public and community in meaningful ways.” Shadloo noted that YouTube has been a leader among social platforms in promoting authoritative content and reducing the audience of “borderline” content, among other moderation practices.
YouTube was the first platform to remove a doctored video of House Speaker Nancy Pelosi (D-Calif.) in 2019, at a time when Facebook’s policy confused many and Twitter declined to comment. And in 2020, it made what in retrospect appears to have been a wise call to allow content about a controversial New York Post story related to Hunter Biden, while Twitter initially banned it before reversing its decision.
The same dynamics that have helped YouTube keep a low profile could also benefit TikTok as it becomes a more influential source of news, political views and information, experts said. Like YouTube, its audience skews young, its format poses challenges for researchers, and it remains thought of largely as an entertainment platform even as influencers use it to discuss issues ranging from climate change to coronavirus vaccines.
Stamos said he hopes YouTube’s recent progress is a sign of more to come.
Otherwise, he added, “It sets an unfortunate precedent for the emerging problematic platforms, like TikTok. Why would they offer the same transparency as Twitter when they can get away with [following] Google’s example?”



Will Oremus writes about the ideas, products, and power struggles shaping the digital world for The Washington Post. Before joining The Post in 2021, he spent eight years as Slate's senior technology writer and two years as a senior writer for OneZero at Medium.
