Commentary on Political Economy

Friday 16 October 2020

HOW I-DENTITY POLITICS WILL LEAD TO FASCISM

The Problem of Free Speech in an Age of Disinformation

This summer, a bipartisan group of about a hundred academics, journalists, pollsters, former government officials and former campaign staff members convened for an initiative called the Transition Integrity Project. By video conference, they met to game out hypothetical threats to the November election and a peaceful transfer of power if the Democratic candidate, former Vice President Joe Biden, were to win. Dividing into Team Trump and Team Biden, the group ran various scenarios about counting ballots and the litigation and protests and violence that could follow a contested election result. The idea was to test the machinery of American democracy.

Describing the results in a Sept. 3 essay in The Washington Post, one of the project’s organizers, Rosa Brooks, a Georgetown law professor and Pentagon official during the Obama administration, mentioned a situation in which Biden won the popular vote but lost in the Electoral College. In that hypothetical case, “desperate Democrats” on Team Biden considered encouraging California and the Pacific Northwest to threaten secession to pressure Republicans to expand the size of the Senate.

The next day, Michael Anton, a former national security adviser to President Trump, published an article about the Transition Integrity Project called “The Coming Coup?” Democrats were “laying the groundwork for revolution,” Anton wrote without evidence in The American Mind, a publication of the Claremont Institute. He warned that ballots harvested “lawfully or not” could tip close states to Biden.

By mid-September, Anton’s article was one of the most-shared links in extremist online communities, according to a newsletter published by the Institute for Strategic Dialogue, a think tank based in London. Dan Bongino, a podcaster and Trump supporter, covered Anton’s essay and the imagined coup in several videos, with one tagged, “They are telling you what they are going to do!” Just two of the videos pulled in at least six million views.

On Sept. 9, a post appeared on Revolver News, a new right-wing website. It claimed without evidence that one participant in the Transition Integrity Project, Norm Eisen, who served as a counsel for the Democrats on the House Judiciary Committee during the impeachment proceedings, was a “central operative” in a “color revolution” against Trump, a term for uprisings that have toppled governments in countries like Georgia and Ukraine. Trump tweeted in praise of Revolver News a few days later.

On Sept. 15, the Fox News host Tucker Carlson had on his show Darren Beattie, a former Trump speechwriter who was fired after reports surfaced that he had attended a gathering of white nationalists in 2016 and who warned about Eisen and a color revolution. Two days later, Trump tweeted that “the Nov 3rd Election result may NEVER BE ACCURATELY DETERMINED, which is what some want,” generating tens of thousands of interactions on Twitter and a round of news coverage about one of the fears that the Transition Integrity Project sought to address — that Trump could refuse to accept the results of the election.

All told, in September the coup fabrication was shared more than 100,000 times from public Facebook pages, generating many millions of interactions and video views, according to the data source CrowdTangle. Alongside Bongino and Fox News, there were small drivers of traffic like Long Islanders for Trump, the Silent Majority Group and a county Republican organization in Oregon, as well as private groups with thousands of members that CrowdTangle doesn’t capture. By the end of the month, the fraction of Republicans who were not “confident” that the election “will be conducted in a fair and equal way” hit 65 percent, higher than it was for independents or Democrats, in an NBC News/SurveyMonkey tracking poll. This month, Trump retweeted a response to a Republican member of Congress, Mark Green, who suggested that Speaker Nancy Pelosi could stage a coup.

The United States is in the middle of a catastrophic public-health crisis caused by the spread of the coronavirus. But it is also in the midst of an information crisis caused by the spread of viral disinformation, defined as falsehoods aimed at achieving a political goal. (“Misinformation” refers more generally to falsehoods.) Seven months into the pandemic in America, with Trump leading the way, coronavirus skeptics continue to mock masks and incorrectly equate the virus with the flu. Throughout the campaign season, Trump and other Republicans have promoted a false narrative of widespread voter fraud, which Attorney General William Barr, as the country’s top law-enforcement official, furthered in a September interview on CNN when he said someone in Texas was indicted for filling out 1,700 ballots for other people, which never happened. As fires tore through California and the Pacific Northwest last month, the president cast doubt on the science behind global warming, and people in Oregon defied evacuation orders because of false rumors that antifa, a loose term for left-wing activists, was setting the blazes and looting empty homes.

The conspiracy theories, the lies, the distortions, the overwhelming amount of information, the anger encoded in it — these all serve to create chaos and confusion and make people, even nonpartisans, exhausted, skeptical and cynical about politics. The spewing of falsehoods isn’t meant to win any battle of ideas. Its goal is to prevent the actual battle from being fought, by causing us to simply give up. And the problem isn’t just the internet. A working paper from the Berkman Klein Center for Internet and Society at Harvard released early this month found that effective disinformation campaigns are often an “elite-driven, mass-media led process” in which “social media played only a secondary and supportive role.” Trump’s election put him in the position to operate directly through Fox News and other conservative media outlets, like Rush Limbaugh’s talk-radio show, which have come to function “in effect as a party press,” the Harvard researchers found.

The false story about Democrats plotting a coup spread through a typical feedback loop. Links shared by Fox News hosts and other right-wing figures aligned with Trump, like Bongino, often rank among the top posts in Facebook’s News Feed in the United States, as measured by likes, comments and shares. Though Fox News is far smaller than Facebook, the social media platform has helped Fox attain the highest weekly reach, offline and online combined, of any single news source in the United States, according to a 2020 report by the Reuters Institute.

It’s an article of faith in the United States that more speech is better and that the government should regulate it as little as possible. But increasingly, scholars of constitutional law, as well as social scientists, are beginning to question the way we have come to think about the First Amendment’s guarantee of free speech. They think our formulations are simplistic — and especially inadequate for our era. Censorship of external critics by the government remains a serious threat under authoritarian regimes. But in the United States and other democracies, there is a different kind of threat, which may be doing more damage to the discourse about politics, news and science. It encompasses the mass distortion of truth and overwhelming waves of speech from extremists that smear and distract.

This concern spans the ideological spectrum. Along with disinformation campaigns, there is the separate problem of “troll armies” — a flood of commenters, often propelled by bots — that “aim to discredit or to destroy the reputation of disfavored speakers and to discourage them from speaking again,” Jack Goldsmith, a conservative law professor at Harvard, writes in an essay in “The Perilous Public Square,” a book edited by David E. Pozen that was published this year. This tactic, too, may be directed by those in power. Either way, it’s often grimly effective at muting critical voices. And yet as Tim Wu, a progressive law professor at Columbia, points out in the same book, the “use of speech as a tool to suppress speech is, by its nature, something very challenging for the First Amendment to deal with.”

These scholars argue something that may seem unsettling to Americans: that perhaps our way of thinking about free speech is not the best way. At the very least, we should understand that it isn’t the only way. Other democracies, in Europe and elsewhere, have taken a different approach. Despite more regulations on speech, these countries remain democratic; in fact, they have created better conditions for their citizenry to sort what’s true from what’s not and to make informed decisions about what they want their societies to be. Here in the United States, meanwhile, we’re drowning in lies.

Facts and transparency are the intended pillars of the modern First Amendment. Since the nation’s founding, the Constitution has guaranteed that the government “shall make no law” abridging “the freedom of speech, or of the press; or the right of the people peaceably to assemble.” For more than a century, however, these limits on the state’s power were worth little. From 1798 to 1801, more than two dozen people, including several newspaper editors, were prosecuted by the administration of President John Adams under the Alien and Sedition Acts, which made “malicious writing” a crime. Protesters were also jailed for criticizing the government during World War I.

In 1919, Justice Oliver Wendell Holmes Jr. invoked the First Amendment to dispute the legality of prosecuting five anarchists for distributing leaflets that called for workers to strike at munitions factories. “The ultimate good desired is better reached by free trade in ideas,” Holmes wrote.

Justice Oliver Wendell Holmes Jr. helped establish modern American free-speech protections with his position on the “free trade in ideas.” Credit: Everett/Bridgeman Images

One of Holmes’s chief influences was the British philosopher John Stuart Mill, who argued in his foundational 1859 treatise “On Liberty” that it is wrong to censor ideas, because knowledge arises from “the clearer perception and livelier impression of truth, produced by its collision with error.” In the process, the capacity of citizens to weigh policy questions is strengthened. The government should not censor false or harmful speech because its judgment might be wrong.

Based on Mill’s conception of free speech, the political theorist Alexander Meiklejohn argued for elevating the right above other rights, as the foundation of democracy, in his 1948 book “Free Speech and Its Relation to Self-Government.” Mill and Meiklejohn stand for the proposition that unfettered debate — Holmes’s “free trade in ideas,” or the “marketplace of ideas,” coined by Justice William O. Douglas in 1953 — furthers the bedrock values of the pursuit of truth, individual autonomy and democratic self-governance.

In the 1960s, based on these principles, Supreme Court majorities laid the cornerstones of modern American free-speech protections. In the case Brandenburg v. Ohio, the justices struck down an Ohio law used to arrest a Ku Klux Klan leader for speaking at a rally, barring the government from punishing speech unless it encouraged and was likely to cause “imminent lawless action,” like a riot. In the foundational case New York Times v. Sullivan, the court made it difficult for a public official to sue a newspaper for libel over false statements. Errors were “inevitable in free debate,” the court said, and “must be protected if the freedoms of expression are to have the ‘breathing space’ that they ‘need,’” quoting a previous ruling.

It’s a fundamentally optimistic vision: Good ideas win. The better argument will prove persuasive.

There’s a countertradition, however. It’s alert to the ways in which demagogic leaders or movements can use propaganda, an older term that can be synonymous with disinformation. A crude authoritarian censors free speech. A clever one invokes it to play a trick, twisting facts to turn a mob on a subordinated group and, in the end, silence as well as endanger its members. Looking back at the rise of fascism and the Holocaust in her 1951 book “The Origins of Totalitarianism,” the political philosopher Hannah Arendt focused on the use of propaganda to “make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism.”

The political philosopher Hannah Arendt argued that political propaganda can outcompete the truth. Credit: Library of Congress/Getty Images

In other words, good ideas do not necessarily triumph in the marketplace of ideas. “Free speech threatens democracy as much as it also provides for its flourishing,” the philosopher Jason Stanley and the linguist David Beaver argue in their forthcoming book, “The Politics of Language.”

Concerns about the harm of unfettered speech have flared on the left in the United States since the 1970s. In that decade, some feminists, led by the legal scholar Catharine A. MacKinnon and the activist Andrea Dworkin, fought to limit access to pornography, which they viewed as a form of subordination and a violation of women’s civil rights. In the 1980s and ’90s, scholars developing critical race theory, which examines the role of law in maintaining race-based divisions of power, called for a reading of the First Amendment that recognized racist hate speech as an injury that courts could redress.

But the Supreme Court has strongly protected hate speech. In 1992, the Supreme Court unanimously said that the City of St. Paul could not specially punish, as a hate crime, the public burning of a cross or the display of a swastika. In 2011, in an 8-to-1 vote, the court said the government could not stop members of the Westboro Baptist Church in Kansas from picketing military funerals across the nation to protest what they perceived to be the government’s tolerance of homosexuality by holding signs like “Thank God for Dead Soldiers.” Speech can “inflict great pain,” Chief Justice John G. Roberts Jr. wrote for the majority. “On the facts before us, we cannot react to that pain by punishing the speaker. As a Nation we have chosen a different course — to protect even hurtful speech on public issues to ensure that we do not stifle public debate.”

In 2012, by a 6-to-3 vote in United States v. Alvarez, the court provided some constitutional protection for an individual’s intentional lies, at least as long as they don’t cause serious harm. The majority said that the “mere potential” for government censorship casts “a chill the First Amendment cannot permit if free speech, thought and discourse are to remain a foundation of our freedom.”

The Supreme Court has also taken the First Amendment in another direction that had nothing to do with individual rights, moving from preserving a person’s freedom to dissent to entrenching the power of wealthy interests. In the 1970s, the court started protecting corporate campaign spending alongside individual donations. Legally speaking, corporate spending on speech that was related to elections was akin to the shouting of protesters. This was a “radical break with the history and traditions of U.S. law,” the Harvard law professor John Coates wrote in a 2015 article published by the University of Minnesota Law School. Over time, the shift helped to fundamentally alter the world of politics. In the 2010 Citizens United decision, the court’s conservative majority opened the door to allowing corporations (and unions) to spend unlimited amounts on political advocacy, as long as they donated to interest groups and political-action committees rather than to campaigns.

A demonstration at the Lincoln Memorial after the Supreme Court’s Citizens United decision in 2010. Credit: Chip Somodevilla/Getty Images

By requiring the state to treat alike categories of speakers — corporations and individuals — the Supreme Court began to go far beyond preventing discrimination based on viewpoint or the identity of an individual speaker. “Once a defense of the powerless, the First Amendment over the last hundred years has mainly become a weapon of the powerful,” MacKinnon, now a law professor at the University of Michigan, wrote in “The Free Speech Century,” a 2018 essay collection. Instead of “radicals, artists and activists, socialists and pacifists, the excluded and the dispossessed,” she wrote, the First Amendment now serves “authoritarians, racists and misogynists, Nazis and Klansmen, pornographers and corporations buying elections.” In the same year, Justice Elena Kagan warned that the court’s conservative majority was “weaponizing the First Amendment” in the service of corporate interests, in a dissent to a ruling against labor unions.

If Trump’s deeply conservative third Supreme Court nominee, Amy Coney Barrett, is confirmed, the court will most likely become more committed to its path of using the First Amendment to empower corporations. Somewhere along the way, the conservative majority has lost sight of an essential point: The purpose of free speech is to further democratic participation. “The crucial function of protecting speech is to give persons the sense that the government is theirs, which we might call democratic legitimation,” says the Yale law professor Robert Post. “Campbell Soup Company can’t experience democratic legitimation. But a person can. If we lose one election, we can win the next one. We can continue to identify with the democratic process so long as we’re given the opportunity to shape public opinion. That’s why we have the First Amendment.”

On May 16, 2017, Fox News posted an article that drew on a report from the local Fox station in Washington, laying out a conspiracy theory about the death of Seth Rich, a staff member at the Democratic National Committee who was apparently the victim of an attempted street robbery. The story falsely implicated Rich in the Russian hacking of committee emails, which were released by WikiLeaks during the 2016 presidential campaign. Sean Hannity amplified the lies about Rich on his Fox News show that night and the former House speaker Newt Gingrich repeated them on “Fox & Friends” a few days later. The falsehoods spread to conspiracy websites and social media. Fox News retracted its false report online a week later, but “Fox & Friends” did not; Hannity said on his radio show, “I retracted nothing.” An ABC affiliate owned by the Sinclair Broadcast Group, a conservative owner of local TV stations, then aired another report on the Rich conspiracy theory, which the local Fox station covered, giving it life for another news cycle.

In a 2018 book, “Network Propaganda,” Yochai Benkler, a director of the Berkman Klein Center at Harvard, and two researchers there, Robert Faris and Hal Roberts, mapped the spread of political disinformation in the United States from 2015 to 2018. Analyzing the hyperlinks of four million news articles, the three authors found that the conservative media did not counter lies and distortions, but rather recycled them from one outlet to the next, on TV and radio and through like-minded websites.

The dearth of competition for factual accuracy among conservative outlets leaves their audiences vulnerable to disinformation even if the mainstream news media combats it. People are more likely to believe fact-checking from a source that speaks against its apparent political interest, research shows. In the eyes of many conservatives, news outlets like The Washington Post, The New York Times and CNN do not fill that role when they challenge a story that Trump and Fox News promote.

Mainstream publications also make mistakes or run with a hyped narrative. The repeated front-page coverage that The New York Times gave to Hillary Clinton’s use of a private email server, after breaking the story, hung over her 2016 campaign and was skewered by press critics — an example of how competing outlets challenge and correct one another (even if the system sometimes fails in real time). This “reality-check dynamic” in the mainstream and left-leaning media, Benkler, Faris and Roberts write, “still leaves plenty of room for partisanship.” But the standards of journalism, however flawed, appear to “significantly constrain disinformation.”

In the past, ensuring a vibrant free press made up of competing outlets was an express aim of federal policy. From the founding until the early 20th century, Congress lowered the cost of starting and running a newspaper or magazine by setting low postage rates for mailed copies. The advent of radio raised questions about how to foster competition and public access. “Lawmakers of both parties recognized the danger that an information chokehold poses to democratic self-government,” says Ellen P. Goodman, a law professor at Rutgers University. “So policymakers adopted structures to ensure diversity of ownership, local control of media and public broadcasting.”

In 1927, when Congress created the licensing system for exclusive rights to the broadcast spectrum, so that radio broadcasters could secure a place on the dial, lawmakers told broadcasters to act “as if people of a community should own a station.” The 1934 Communications Act similarly required anyone with a broadcast license to operate in the “public interest” and allocated spectrum based on ensuring that local communities had their own stations. In 1949, the Federal Communications Commission established the fairness doctrine, which interpreted operating in the public interest to require broadcasters to cover major public-policy debates and present multiple points of view. And in 1967, Congress created and funded the Corporation for Public Broadcasting, whose mission is to “promote an educated and informed civil society,” and reserved broadcast spectrum for local NPR and PBS stations.

During these decades, broadcasters were held to a standard of public trusteeship, in which the right to use the airwaves came with a mandate to provide for democratic discourse. Broadcasters made money — lots of it — but profit wasn’t their only reason for existing. “The networks had a public-service obligation, and when they went to get their licenses renewed, the news divisions fulfilled that,” says Matthew Gentzkow, an economist at Stanford who studies trust in information. The model coincided with a rare period in American history of relatively high levels of trust in media and low levels of political polarization.

But public trusteeship for broadcast and diverse ownership began to unravel with the libertarian shift of the Reagan era. In the mid-1980s, the administration waived the F.C.C. rule that barred a single entity from owning a TV station and a daily newspaper in the same local market to allow Rupert Murdoch to continue to own The New York Post and The Boston Herald after he bought his first broadcast TV stations in New York and Boston.

The F.C.C. repealed the fairness doctrine, which had required broadcasters to include multiple points of view, in 1987. “When that went, that was the beginning of the complete triumph, in media, of the libertarian view of the First Amendment,” the Rutgers law professor Goodman says.

In 1996, Murdoch and Roger Ailes, a former Nixon campaign adviser, started Fox News, the first TV network to cultivate a conservative audience. A decade later, studies showed what has become known as the Fox News Effect: After a local cable system adds Fox News to the lineup, voters in the vicinity tend to shift toward Republican candidates. As Trump’s ally and frequent platform, Fox News can help shift its audience’s behavior toward his views even when they may risk public health. In a study this year, a team of economists, controlling for other factors, found that communities with higher numbers of Fox News viewers were less likely to comply with stay-at-home orders to fight the coronavirus.

Rupert Murdoch was able to own TV stations and daily newspapers in the same local market after a libertarian shift under the Reagan administration in the mid-1980s. Credit: Bettmann/Getty Images

In the early ’90s, David D. Smith, a conservative who inherited the Sinclair Broadcast Group from his father, bought a second local TV station in Pittsburgh, despite a federal regulation barring the ownership of more than one station in a local market. In Baltimore, Sinclair got around the same rule by creating another company, Glencairn, controlled by Smith’s mother and an employee. Sinclair is growing as local journalism is hollowing out: About 1,800 metro and community newspapers have closed or merged since 2004. Sinclair is now the largest station owner in swing states.

More than three-quarters of Americans say they trust local TV news, according to a recent survey by the Poynter Institute. Sinclair owns local affiliates of CBS, ABC, NBC and the CW, as well as Fox, so its partisan leanings aren’t immediately apparent. But they’re there. “We are here to deliver your message — period,” Smith reportedly told Trump during the 2016 campaign. In early 2018, dozens of Sinclair newscasters across the country echoed Trump’s diatribes against the press by reading from the same script warning of “fake stories” from “some members” of the media. (Deadspin captured the repetition of the script in an eerie video montage.) In July, Sinclair released online an interview with Judy Mikovits, a conspiracy theorist who has accused Dr. Anthony Fauci of manufacturing the coronavirus. When the segment drew criticism, the company canceled the planned on-air broadcast but called itself “a supporter of free speech and a marketplace of ideas and viewpoints, even if incredibly controversial.”

The founding ethos of the internet was to treat sources of information equally. Cut loose from traditional gatekeepers — the publishing industry and the government — the web would provide the world’s first neutral delivery of content. But in short order, the libertarian principles that weakened media regulation allowed a few American tech companies to become the new gatekeepers. The United States gave platforms like Google, Facebook and Twitter free rein to grow. Google bought YouTube. Facebook bought Instagram and WhatsApp.

The business model for the dominant platforms depends on keeping users engaged online. Content that prompts hot emotion tends to succeed at generating clicks and shares, and that’s what the platforms’ algorithms tend to promote. Lies go viral more quickly than true statements, research shows.

In many ways, social media sites today function as the public square. But legally speaking, internet platforms can restrict free speech far more than the government can. They’re like malls, where private owners police conduct. Facebook, YouTube and Twitter have guidelines that moderate content that could drive away users, including spam and pornography, and also certain forms of harassment, hate speech, fake engagement or misrepresentation and violent extremism. But for years, the companies enforced these rules subjectively and unevenly — allowing for explosions of anti-Semitic memes and targeted harassment of women, for example.

Mark Zuckerberg of Facebook and Jack Dorsey of Twitter have each said that their sites cannot be “arbiters of truth” and make important exceptions to their guidelines. Facebook leaves up content, including hate speech, that breaks the rules when it decides it’s newsworthy, because it’s a post from a politician or a public figure. “In the same way that news outlets will report what a politician says,” Zuckerberg said in a Facebook post in June, “we think people should generally be able to see it for themselves on our platforms.”

Social media sites have leaned on First Amendment principles to keep secret the identities of people who appear to abuse their services. Following the right-wing news coverage of the conspiracy theory about Seth Rich, his brother subpoenaed Twitter, in a defamation suit against media companies, to uncover the name of the person behind the Twitter account @whysprtech, alleging that person sent to Fox News a forged F.B.I. document about Rich’s case. Twitter fought back in court, saying that unmasking @whysprtech would chill speech by violating what the platform’s lawyers called a constitutional right to be anonymous. This month, a judge ordered Twitter to reveal information that could unmask the person or people behind @whysprtech.

Over the past two months, as Trump attacked mail-in voting and the validity of the November election results, Facebook, YouTube and Twitter said they would impose a few more controls on speech about voting. The platforms expanded or reaffirmed their policies for removing a narrow category of content that misleads people about how to vote — for example, by saying you can fill out a ballot online.

In September, Facebook and YouTube joined Twitter in adding labels to content that, according to a fact check, could undermine the election or mislead about its results. (Facebook contracts with an independent fact-checking network, which includes both The Associated Press and Check Your Fact, a subsidiary of the right-wing outlet The Daily Caller. Twitter does fact-checking internally. YouTube relies on a network of news organizations, including PolitiFact and The Washington Post Fact Checker.)

Fact-checking and labeling are First Amendment-friendly responses. They counter false speech with more speech, at the initiative of a private company, not the direction of the government. Today the research consensus among social scientists is that some fact-checking methods significantly reduce the prevalence of false beliefs. In print or on TV, journalists can use headlines or chyrons to provide context and debunking in real time — though they sometimes fail to do so.

Until very recently, Facebook and Twitter used mild labeling language. On Sept. 28, Trump tweeted: “The Ballots being returned to States cannot be accurately counted. Many things are already going very wrong!” In small blue print at the bottom of the post, Twitter added a warning symbol — a small exclamation point in a circle — along with the text “Learn how voting by mail is safe and secure.” Facebook labeled the same post, suggesting that voters visit its “Voting Information Center” without including a warning symbol.

Kate Starbird, a professor of human-computer interaction at the University of Washington who tracks social media disinformation, called Facebook’s label “worse than nothing.” Adding a weak label to a Trump post mostly has the effect of “giving it an attention bump by creating a second news cycle about Republican charges of bias in content moderation,” says Nathaniel Persily, a Stanford law professor and co-director of the university’s Program on Democracy and the Internet.

Facebook has since updated its labels, based on tests and feedback, including from civil rights leaders. “The labels we have now, we have far more than we used to,” says Monika Bickert, Facebook’s vice president for content policy. “They’ve gotten stronger. But I would expect we’ll continue to refine them as we keep seeing what’s working.” Facebook updated the label on Trump’s Sept. 28 post to “Both voting in person and voting by mail have a long history of trustworthiness in the US and the same is predicted this year. Source: Bipartisan Policy Center.” On an Oct. 6 Trump post with more falsehoods about voting, Facebook added an additional sentence to that label: “Voter fraud is extremely rare across voting methods.” (Other labels, though, remain mild, and plenty of misleading content related to voting remains unlabeled.)

Angelo Carusone, the president of Media Matters for America, a nonprofit media watchdog group, finds the changes useful but frustratingly late. “We went from them refusing to touch any of the content, an entire ocean of disinformation on voting and election integrity, and dismissal of any efforts to address that — to this. They let it metastasize, and now they start doing the thing they could have done all along.” Carusone also points out that independent researchers don’t have access to data that would allow them to study key questions about the companies’ claims of addressing disinformation. How prevalent are disinformation and hate speech on the platforms? Are people who see Facebook, Twitter and YouTube’s information labels less likely to share false and misleading content? Which type of warning has the greatest impact?

Twitter and Facebook reduce the spread of some false posts, but during this election season, Starbird has watched false content shared or retweeted tens of thousands of times or more before companies make any visible effort to address it. “Currently, we are watching disinformation go viral & trying desperately to refute it,” she tweeted in September. “By the time we do — even in cases where platforms end up taking action — the false info/narrative has already done its damage.”

Facebook came under intense criticism for the role it played in the last presidential race. During the 2016 campaign, Facebook later reported, Russian operatives spent about $100,000 to buy some 3,000 ads meant to benefit Trump largely by sowing racial division. By choosing Facebook, the operatives got an outsize payoff on a small investment, as the site’s users circulated the planted ads to their followers. “Facebook’s scale means we’ve concentrated our risk,” says Brendan Nyhan, a political scientist at Dartmouth College. “When they’re wrong, they’re wrong on a national or global scale.”

Facebook and YouTube have treated political ads as protected speech, allowing them to include false and misleading information. Online ads — like direct mail and robocalls — can make setting the record straight very difficult. Online advertisers can use microtargeting to pinpoint the segments of users they want to reach. “Misleading TV ads can be countered and fact-checked,” while a misleading message in a microtargeted ad “remains hidden from challenge by the other campaign or the media,” Zeynep Tufekci, a sociologist at the University of North Carolina at Chapel Hill and the author of the 2017 book “Twitter and Tear Gas,” wrote in a prescient 2012 Op-Ed in The New York Times.

In this election season, domestic groups are adopting similar tactics. This summer, the Trump-aligned group FreedomWorks, which was seeded by the billionaire Koch brothers, promoted 150 Facebook ads directing people to a page with a picture of LeBron James and a quote in which he denounced poll closures as racist, repurposed to deceive people into thinking he was discouraging voting by mail. After The Washington Post reported on it, Facebook removed the page for violating its voter-interference policy, but only after the ads had been seen hundreds of thousands of times.

Coordinated fake accounts posting about the election have also shown up on Twitter. In August, NBC News reported on a series of viral tweets that appeared to be from Black men who said they were lifelong Democrats and planned to leave the party. The accounts were fake; one used a stock photo of a Black man, and the other used a photo of a Dutch model. Twitter eventually took them down. The company recently said that as of Oct. 20, it is making more changes to protect the election, including temporarily warning users if they try to share content that the platform has flagged as false.

Another reason political ads are controversial online is that campaigns or groups that pay for them don’t have to disclose their identities, as they’re required to do on TV and radio and in print. “The First Amendment value of individual autonomy means we should know who is speaking to us and why,” the Rutgers law professor Goodman argues. But online, neither the Supreme Court nor Congress has stepped in to require disclosure.

Twitter banned political ads a year ago. This month, Facebook said it would temporarily ban political ads after the polls close on Nov. 3. Last month, the company took another step to protect the U.S. election. It restricted its Messenger app to prevent the mass forwarding of private messages, a practice that has done terrible damage in other countries. For several years, falsehoods that were forwarded from person to person, and from group to group, in private encrypted messages on WhatsApp sparked riots and fatal beatings targeting religious and ethnic minorities in countries including Bangladesh, India, Myanmar and Sri Lanka. In 2018, Facebook started limiting the forwarding of any post on WhatsApp to 20 people; now the limit is five for WhatsApp and Messenger.

As social media companies have tried to address the spread of disinformation and other toxic speech, conservatives including Trump have hurled a series of accusations that the companies are showing bias against them. In May, after Twitter first added labels that read “Get the facts about mail-in ballots” to two Trump tweets predicting mass ballot fraud, the president signed a largely symbolic executive order directed at social media sites, calling the platforms’ labels “selective censorship that is harming our national discourse.”

In February, The Washington Post reported on an internal effort by Facebook (called Project P, for propaganda) after the 2016 election to take down pages that spread Russian disinformation. The project foundered after Joel Kaplan, Facebook’s vice president for global public policy, reportedly said at a high-level meeting, “We can’t remove all of it because it will disproportionately affect conservatives,” according to a source at Facebook who spoke to The Post anonymously. In an email this month, a Facebook representative said Kaplan’s point about Project P was that the company “needed a clear basis for the removal because the impact would be felt more on the right than the left, and we would face criticism.”

Kaplan has deep Republican ties. He was present at the so-called Brooks Brothers Riot in Florida shortly after the contested presidential election in 2000, when a group of Republicans in suits succeeded in stopping a recount of ballots to the benefit of their candidate, George W. Bush. In 2018, he sat behind his close friend Brett Kavanaugh during Kavanaugh’s confirmation hearing for the Supreme Court. (Kaplan apologized after some of his employees objected that his appearance seemed like a Facebook endorsement of Kavanaugh.)

Facebook employees have also raised questions about whether the company’s misinformation policy is enforced evenhandedly. According to the policy, publications and individual users will receive a “misinformation strike” for a post that a fact checker determines is false or misleading. A publication with multiple misinformation strikes in 90 days is supposed to lose its eligibility to be in Facebook News, a curated section that generates traffic for publications. (The New York Times is in Facebook News.) In August, BuzzFeed reported that at an all-hands meeting the previous month, Facebook employees asked Zuckerberg how Breitbart News remained a news partner after sharing the video in which doctors called hydroxychloroquine “a cure for Covid” and said “you don’t need a mask.” Through Breitbart’s page, the video racked up more than 20 million views in several hours before Facebook removed it. Zuckerberg said Breitbart didn’t have a second strike within the 90-day period.

But in an internal message group, employees wrote that misinformation strikes against Breitbart had been “cleared without explanation,” and gathered evidence of “preferential treatment” to help conservative accounts in these situations, according to BuzzFeed. One of the employees was later fired; Facebook said it was because “he broke the rules.” When I spoke to Bickert, she said Breitbart was cleared by her team because of “glitches” in Facebook’s system, such as not accurately notifying the publisher. This has happened “to publishers on the left and the right,” Bickert said.

In the last two years, a series of employees have left Facebook sounding alarms. In 2019, Yael Eisenstat resigned from her role as Facebook’s head of elections integrity after failing to persuade the company to combat misinformation in political ads. In a November op-ed in The Washington Post, she called on the company to stop profiting “from providing politicians with potent information-warfare tools.” Resigning from Facebook this summer, two software engineers, Max Wang and Ashok Chandwaney, separately accused the company of “profiting from hatred.” Sophie Zhang, a data scientist who was fired from Facebook in September, wrote a 6,600-word memo detailing disinformation campaigns she had found that sought to influence elections in countries including Ecuador, Honduras and Ukraine. “I have blood on my hands,” she wrote.

John Stuart Mill wrote a century and a half ago that “All silencing of discussion is an assumption of infallibility.” There is still plenty of reason to believe that moving away from the American free-speech tradition could make us too quick to dismiss apparently false ideas that turn out to have merit — and that airing them is the only way to find out. At Howard University’s commencement in 2016, President Barack Obama warned students against pushing colleges to disinvite speakers, “no matter how ridiculous or offensive you might find the things that come out of their mouths.” Instead, he told them, “beat them on the battlefield of ideas.”

In the last several years, however, some liberals have lost patience with rehashing debates about ideas they find toxic. The American Civil Liberties Union celebrated its decision in 1977 to defend the free speech rights of Nazis to march in Skokie, Ill. Forty years later, some lawyers and board members for the A.C.L.U. objected when the group defended the neo-Nazis who demonstrated in Charlottesville, Va.

Counterprotesters at a Nazi march in Skokie, Ill., in 1977. Credit: Charles Knoblock/Associated Press
A vigil in Charlottesville, Va., in 2017 after the violence that followed a neo-Nazi demonstration. Credit: Salwan Georges/Getty Images

Cancel culture — subjecting people to professional or social penalties for their views — has unsettled universities and workplaces. Liberal students have shouted down conservative speakers including Charles Murray and Christina Hoff Sommers. Conservatives have also condemned speakers and academics, for example, for supporting Palestinian rights. The New York Times’s decision this summer to publish an Op-Ed in which Senator Tom Cotton called for sending in federal troops to crack down on protests against the police roiled the paper’s staff. Citing a “significant breakdown in our editing processes,” the publisher, A.G. Sulzberger, announced the resignation of the editorial-page editor, James Bennet.

The First Amendment doesn’t have a formal role in these situations — newspapers and universities can decide which views they want to promote — but the principle that it’s paramount to protect dissident speech makes them difficult to untangle. If people have the right to peacefully protest against the police, don’t neo-Nazis have the same right to peacefully demonstrate? Why is Tom Cotton’s Op-Ed beyond the pale but not an October Op-Ed by Regina Ip, a legislator in Hong Kong, who defended police officers’ filling the streets and arresting hundreds of pro-democracy demonstrators?

The principle of free speech has a different shape and meaning in Europe. For the European Union, as well as democracies like Canada and New Zealand, free speech is not an absolute right from which all other freedoms flow. The European high courts have allowed states to punish incitements of racial hatred or denial of the Holocaust, for example. Germany and France have laws that are designed to prevent the widespread dissemination of hate speech and election-related disinformation. “Much of the recent authoritarian experience in Europe arose out of democracy itself,” explains Miguel Poiares Maduro, board chairman of the European Digital Media Observatory, a project on online disinformation at the European University Institute. “The Nazis and others were originally elected. In Europe, there is historically an understanding that democracy needs to protect itself from anti-democratic ideas. It’s because of the different democratic ethos of Europe that Europe has accepted more restrictions on speech.”

After World War II, European countries also promoted free speech, and the flow of reliable information, by making large investments in public broadcasting. Today France TV, the BBC, ARD in Germany and similar broadcasters in the Netherlands and Scandinavia continue to score high in public trust and audience share. Researchers in Germany and France who have mapped the spread of political lies and conspiracy theories there say they have found pockets online, especially on YouTube, but nothing like the large-scale feedback loops in the United States that include major media outlets and even the president.

The difference between the political-speech traditions of the United States and Europe was acutely apparent in the American and French presidential elections of 2016 and 2017. When Russian operatives hacked into the computers of the Democratic National Committee, they gave their stolen trove of D.N.C. emails to WikiLeaks, which released the emails in batches to do maximum damage to Clinton and her party in the months before the election. The news media covered the stolen emails extensively, providing information so the public could weigh it, even if a foreign adversary had planted it.

The French press responded otherwise to a Russian hack in May 2017. Two days before a national election, the Russians posted online thousands of emails from En Marche!, the party of Emmanuel Macron, who was running for president. France, like several other democracies, has a blackout law that bars news coverage of a campaign for the 24 hours before an election and on Election Day. But the emails were available several hours before the blackout began. They were fair game. Yet the French media did not cover them. Le Monde, a major French newspaper, explained that the hack had “the obvious purpose of undermining the integrity of the ballot.”

Marine Le Pen, Macron’s far-right opponent, accused the news media of a partisan cover-up. But she had no sympathetic outlet to turn to, because there is no equivalent of Fox News or Breitbart in France. “The division in the French media isn’t between left and right,” said Dominique Cardon, director of the Media Lab at the university Sciences Po. “It’s between top and bottom, between professional outlets and some websites linked to very small organizations, or individuals on Facebook or Twitter or YouTube who share a lot of disinformation.” The faint impact of the Macron hack “is a good illustration of how it’s impossible to succeed at manipulation of the news just on social media,” said Arnaud Mercier, a professor of information and political communication at the University Paris 2 Panthéon-Assas. “The hackers needed the sustainment of the traditional media.”

Emmanuel Macron won the 2017 French presidential election despite the Russians posting online thousands of emails from his political party. Credit: Bob Edme/Associated Press

The challenge of informing the public accurately about the coronavirus has also played out differently in the U.S. and Europe. In March, the World Health Organization appealed for help with what it called an “infodemic.” Facebook, YouTube, Twitter and others pledged to elevate “authoritative content” and combat misinformation about the virus around the world.

But in August, the global activist group Avaaz released a report showing that conspiracies and falsehoods about the coronavirus and other health issues circulated on Facebook through at least May, far more frequently than posts by authoritative sources like the W.H.O. and the Centers for Disease Control and Prevention. Avaaz included web traffic from Britain, France, Germany and Italy, along with the United States, and found that the U.S. accounted for 89 percent of the comments, likes and shares of false and misleading health information. “A lot of U.S.-based entities are actually targeting other countries with misinformation in Italian or Spanish or Portuguese,” said Fadi Quran, the campaign director for Avaaz. “In our sample, the U.S. is by far the worst actor.”

America’s information crisis was not inevitable. Nor is it insoluble. Whatever the Supreme Court does, there’s no legal barrier to increasing the delivery of reliable information. The government, federal or state, could invest in efforts to do exactly that. It could stop the decline of local reporting by funding nonprofit journalism. It could create new publicly funded TV or radio outlets, offering more alternatives that appeal across the ideological spectrum. The only obstacles to such cures for America’s disinformation ills are political.

Last spring, when Twitter started labeling Trump’s misleading and false tweets about voting fraud, he called for revoking Section 230 of the 1996 Communications Decency Act, which Congress wrote in an early stage of the internet to help it grow. Section 230 effectively makes internet platforms, unlike other publishers, immune from libel and other civil suits for the content they carry. Biden also called for revoking Section 230 in January, citing Facebook for “propagating falsehoods they know to be false.”

Taking away the platforms’ immunity, however, seems like a bad fit for the problems at hand. The threat of being sued for libel could encourage platforms to avoid litigation costs by pre-emptively taking down content once someone challenges it. Some of that content would be disinformation and hate speech, but other material might be offensive but true — a risk of overcensorship.

But there’s another idea with bipartisan support: Make the platforms earn their immunity from lawsuits. The Rutgers law professor Goodman and others have proposed using Section 230 as leverage to push the platforms to be more transparent, for example, by disclosing how their algorithms order people’s news feeds and recommendations and how much disinformation and hate speech they circulate. A quid pro quo could go further, requiring the companies to change their algorithms or identify super-spreaders of disinformation and slow the virality of their posts. To make sure new media sites can enter the market, the government could exempt small start-ups but impose conditions on platforms with tens of millions of users.

Congress, as well as the Justice Department, can also promote competition through antitrust enforcement. In early October, the House Judiciary Committee’s Democratic leadership released a 449-page report, based on an extensive investigation, that said Facebook, Google, Amazon and Apple have monopoly power in their markets like that of the “oil barons and railroad tycoons” of the early 20th century. “Because there is not meaningful competition, dominant firms face little financial consequence when misinformation and propaganda are promoted online,” the report stated.

There are plenty of ideas, and bills, floating around Washington that seek to improve the online speech environment — like the giant step of using antitrust law to break up the big tech companies, or medium-size steps like banning microtargeted political ads, requiring disclosure of ad buyers, or making the platforms file reports detailing when they remove content or reduce its spread. But the United States may miss the chance to lead. To fend off regulation and antitrust enforcement, the internet platforms spend millions of dollars on lobbying in Washington. They align their self-interest with a nationalist pitch, warning that curbing America’s homegrown tech companies would serve the interests of Chinese competitors like TikTok.

Europe, however, doesn’t have a stake in the dominance of American tech companies. Policymakers talk about the importance of maintaining the health of their democracies. “We see how the money of advertisers for extreme speech is shifting from the traditional media to digital media,” Věra Jourová, the vice president for values and transparency at the European Commission, told me this summer. “Google and Facebook are the big suckers of this money.” Among other things, Jourová mentioned regulating the platforms’ algorithms. “These issues here are not driven by big money like they are in the U.S., or by regressive ideas as in a state like China,” she said. Maduro of the European Digital Media Observatory has proposed treating the platforms like essential facilities, the European version of public utilities, and subjecting them to more regulation. Senator Elizabeth Warren, the Massachusetts Democrat, has outlined a similar idea in the U.S. It would be a huge shift.

As we hurtle toward the November election with a president who has trapped the country in a web of lies, with the sole purpose, it seems, of remaining in office, it’s time to ask whether the American way of protecting free speech is actually keeping us free. Hannah Arendt finished her classic work on totalitarianism in the early 1950s, after barely escaping Germany with her life, leaving friends and homeland behind. She was a Jewish intellectual who saw the Nazis rise to power by demonizing and blaming Jews and other groups with mockery and scorn. The ideal subject of fascist ideology was the person “for whom the distinction between fact and fiction (i.e. the reality of experience),” Arendt wrote, “and the distinction between true and false (i.e. the standards of thought) no longer exist.” An information war may seem to simply be about speech. But Arendt understood that what was at stake was far more.

Emily Bazelon is a staff writer for the magazine and the Truman Capote fellow for creative writing and law at Yale Law School. Her book “Charged” won the Los Angeles Times Book Prize in the current-interest category and the Silver Gavel book award from the American Bar Association. 
