The Return of Fake News—and Lessons From Spam

Michael Brochstein/Getty Images
The information ecosystem is broken. Our political conversations are happening on infrastructure—Facebook, YouTube, Twitter—built for viral advertising. The velocity of social sharing, the power of recommendation algorithms, the scale of social networks, and the accessibility of media manipulation technology have created an environment where pseudo-events, half-truths, and outright fabrications thrive. Edward Murrow has been usurped by Alex Jones.

Renee DiResta (@noUpside) is an Ideas contributor for WIRED, the director of research at New Knowledge, and a Mozilla fellow on media, misinformation, and trust. She is affiliated with the Berkman-Klein Center at Harvard and the Data Science Institute at Columbia University.

But we’ve known this for a while. Over the past two years, journalists and researchers have assembled an entire lexicon for describing these problems: misinformation, disinformation, computational propaganda. We started having congressional hearings about how algorithms are changing society. And we’ve talked often about filter bubbles on Google, conversational health metrics on Twitter, radicalization on YouTube, and “coordinated inauthentic activity” on Facebook.

In that time, public opinion shifted. People began to feel that tech companies were not just neutral hosts; they bore some responsibility for what their algorithms circulated. Regulators began to debate solutions, from ad disclosures to algorithmic auditing to antitrust. We actually have made material progress on detecting and taking down long-term state-sponsored influence operations. And that’s because, in some ways, those operations were the low-hanging fruit: Americans’ free-speech rights weren’t harmed in the takedown of Russian troll pages.

But now that a video of Speaker Nancy Pelosi—edited to make her appear drunk or disoriented—has reached millions of people, spread in part by tens of thousands of shares on Facebook, we’re back to talking about the original issue that was glaringly obvious during the 2016 election: “fake news.” And we haven’t made much progress there.

Many people have come to conflate fake news with Russia’s influence operations. That’s primarily because those two facets of information disorder rose to public attention nearly simultaneously following the 2016 presidential campaign. But the problems are distinct. The Russian operation was a state-sponsored disinformation campaign: fake accounts using social platforms to spread highly polarizing propaganda (with plenty of memes drawn from homegrown American hyperpartisan media). Some of the content was false, but much of it was not.

“Fake news” was actual false news: stories that were blatantly made up, written and shared by people in the US who were economically or politically motivated. Or, in some cases, by Macedonians seeking a paycheck. While the motives may vary, the product is the same: fictional stories.

Researchers are still debating the extent of the impact on the 2016 election. But the reach is undeniable. “Facebook fake news creator claims he put Trump in White House,” read a CBS News headline in November 2016, describing the work of an American fake-content creator, Paul Horner. Perhaps a statement of bluster—Horner regularly boasted about his reach—but his “hoaxes” (an increasingly inadequate term) received widespread attention on social platforms, were retweeted by prominent political figures, were misclassified as real news by Google Search, and occasionally received coverage in mainstream press. And while Facebook CEO Mark Zuckerberg famously called the idea that fake news could have had an impact on the election “crazy,” he has since recanted.

So: This is not new. But why does the problem persist? To truly understand the challenges and context of “fake news,” it’s important to return to the seminal events of 2016—and to one in particular in which Facebook made precisely the wrong choice, known colloquially in disinformation researcher circles as Conservativegate or Trending Topicsgate.

Trending Topicsgate was a tempest that occurred in May 2016, when a content moderator who worked for Facebook’s Trending Topics feature styled himself a whistle-blower, opened up to Gizmodo, and said that Facebook employees were suppressing conservative news.

Politicians and pundits leapt on it, and Facebook reacted immediately. Tom Stocky, the product manager in charge of Trending Topics, posted an explanation of how the feature worked and disputed the charge, writing “we have found no evidence that the anonymous allegations are true.” The company promptly invited prominent conservative media personalities to its Menlo Park campus to discuss the situation. Glenn Beck was part of the delegation, and the next day published a Medium post saying he was “convinced that Facebook is behaving appropriately and trying to do the right thing.”

There was no evidence to support the claim of overt political bias. The New York Times reported that some Facebook employees, who spoke on condition of anonymity, said that any “suppression” happened “based on perceived credibility—any articles judged by curators to be unreliable or poorly sourced, whether left-leaning or right-leaning, were avoided, though this was a personal judgment call.” In other words, sites that appeared to be spreading viral nonsense were deprecated from Trending, regardless of political leanings.

Nevertheless, Facebook responded to the unsubstantiated allegation by eliminating human curation from Trending Topics entirely; there’s no way to have biased human editors if there are no humans.

This switch to pure algorithmic curation was an unmitigated disaster. Bullshit trended immediately, and regularly: “Megyn Kelly Fired From Fox News!” was a top Trending headline two days after the call was made. Yet Facebook kept Trending Topics for two more years—providing fodder for many more stories about loony trends—before killing the feature in June 2018. Facebook’s announcement downplayed the feature’s controversial history. “We’re removing Trending soon to make way for future news experiences on Facebook,” the company wrote.

Now, three years after that watershed Gizmodo story and the tempest that followed, we’re at a similar juncture—but with few lessons learned. The question of how social networks should handle the edited Pelosi video made it all the way to Anderson Cooper 360, where Cooper had an awkward eight-minute exchange with Monika Bickert, Facebook’s VP for product policy and counterterrorism. Bickert tried to explain the company’s decision to continue hosting the video, but came across as evasive and inconsistent. That’s because platforms are still working out how to set policy around fake information that isn’t seeded by a hostile state actor or a spam page. YouTube chose to remove the video; Facebook chose to leave it up, and to leverage the “inform” approach (from its “remove, reduce, inform” framework). Hyperpartisan content is a political minefield—allegations of censorship are constant—and so what we see are extremely reactive, very ad hoc solutions. Meanwhile, there’s still very little transparency for outside researchers to see what’s spreading, or how. And the rise of deepfakes—wholly AI-generated content, as opposed to the Pelosi video’s edited clips from real (fact-checkable) speeches—will only make this problem thornier and more urgent.

Can we devise solutions that aren’t reactive and ad hoc, and aren’t bogged down by accusations of partisan bias? One idea is to treat fake news as a distribution problem and handle it more like spam. Spam is something the platforms already understand and deal with. In the early days of the web, industry players introduced tools like DNS blacklists. The Spamhaus Project has been compiling anti-spam registries for over a decade. Back then, it wasn’t controversial to suggest that we could use signals to determine whether or not a domain was low quality. There was a consensus that not all information was worth pushing directly into people’s inboxes. Today, the vast majority of email that’s clearly crap is stopped at the source—and no one mourns the free speech rights of spammers. Content that is borderline makes it into a designated Spam folder, where masochists can read through it if they choose. And legitimate companies that use spammy email marketing tactics are penalized, so they’re incentivized to be on their best behavior.
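To make the spam analogy concrete, here is a minimal sketch of the kind of lookup those early anti-spam tools perform: asking a DNS-based blocklist whether a domain is known to be junk. The zone name and test domain below are assumptions for illustration; real blocklists differ in their access policies and in how their return codes should be interpreted.

```python
# A minimal sketch, assuming a DNSBL-style domain blocklist such as
# Spamhaus's DBL. Listing a domain in the zone causes the query
# "<domain>.<zone>" to resolve; an NXDOMAIN answer means "not listed".
# Zone name, test domain, and semantics here are illustrative only.

import socket

DBL_ZONE = "dbl.spamhaus.org"  # assumed example blocklist zone


def domain_is_listed(domain: str, zone: str = DBL_ZONE) -> bool:
    """Return True if the blocklist publishes an entry for `domain`."""
    try:
        socket.gethostbyname(f"{domain}.{zone}")  # any A record -> listed
        return True
    except socket.gaierror:
        return False  # no record -> not listed (or lookup refused)


if __name__ == "__main__":
    # "dbltest.com" is commonly documented as a DBL test entry;
    # treat it as an assumption rather than a guarantee.
    for d in ("example.com", "dbltest.com"):
        print(d, "listed" if domain_is_listed(d) else "clean")
```

Email infrastructure treats a hit in such a registry as one signal among many before throttling or folder-ing a message; the point of the analogy is that distribution, not speech, is the lever.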

Examining distribution more closely allows for a balance of free expression and a healthier information ecosystem. It’s debatable whether Facebook was right to leave up the Pelosi video, though it’s becoming clearer that, much like in past examples, the creator coordinated distribution across several sites he managed. The video likely should never have gone viral in the first place. Once it did, it should have been clearly and unambiguously labeled an edited video, in-platform—not with an interstitial telling people they can go read an article for more information. In a Lawfare essay, the authors put it eloquently: “It may be best to embrace a more-aggressive combination of demotion and flagging that allows the content to stay posted, yet sends a much louder message than the example set by Facebook.”
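For what a “demote and flag” approach might look like mechanically, here is an illustrative sketch of a feed-ranking adjustment. The field names, demotion factor, and label text are assumptions for the sake of the example, not any platform’s actual policy or code.

```python
# Illustrative only: a toy ranking adjustment in the spirit of
# "demotion and flagging". A flagged post stays visible but loses
# most of its distribution and carries an unambiguous in-platform label.

from dataclasses import dataclass

DEMOTION_FACTOR = 0.1  # assumed: flagged posts keep ~10% of their normal reach


@dataclass
class Post:
    post_id: str
    engagement_score: float
    manipulated_media: bool = False  # e.g., set after fact-checker or forensics review


def rank_score(post: Post) -> float:
    """Sharply reduce distribution for flagged posts without removing them."""
    score = post.engagement_score
    if post.manipulated_media:
        score *= DEMOTION_FACTOR
    return score


def label(post: Post) -> str:
    """An explicit in-platform label rather than a link-out interstitial."""
    return "Edited video: altered from the original footage" if post.manipulated_media else ""
```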

This isn’t a perfect parallel—as others have pointed out, users want to see the content, and it doesn’t address the problems inherent in engagement-based business models. There’s still a lot left to sort out. What is the distinction between a guerrilla marketer and an unethical spammer? Where does clicktivism end and algorithmic gaming begin? These are hard questions for the industry, and they require collaborative solutions. But it’s been three years, and we have to tackle this. As former president Barack Obama said when fake news was spreading during the 2016 campaign, “If we are not serious about facts and what’s true and what’s not, if we can’t discriminate between serious arguments and propaganda, then we have problems.” 2020 is approaching…

