How to Stop Misinformation Before It Gets Shared

Last updated: 04-11-2021



Rumors are a type of information cascade: a proposition for belief, of topical reference, disseminated without official verification.
This tension between speed and accuracy came to define early news reporting. News that was both timely and accurate was incredibly expensive, requiring networks of couriers and messengers known as postal systems. We can still see this holdover in the word “post” in many newspaper names today.
Early journalists were far from perfect, and many of the first newspapers competed for attention by aggressively peddling false, outrageous, or nakedly partisan stories, gruesome crime coverage in particular. But during the 19th century, some papers slowly matured and professionalized, building reputations for publishing factual narratives, and engendering trust as “objective” news sources.
Through fits and starts, this patchwork system of news-gathering and distribution became the dominant way we empirically verify information before amplifying it. We learned to trust journalists, largely because they fact-check rumors.
The information environment transformed yet again with the emergence of radio, and then television. Although these technologies allowed for unprecedented reach, they still relied on human gatekeepers. Each of these inventions created a new means of determining consensus that centered narrow sources of mostly verified yet selective knowledge. The public, a captive audience, was largely exposed to the same “objective” information.
There were, however, significant downsides: Reporting on powerful authorities, companies, and institutions was often uncritical, particularly if it might cause a conflict with the financial interests of the channel or newspaper. Yet most professional reporters generally adhered to journalistic standards, and the proliferation of blatantly false viral rumors was largely kept to a minimum.
Frictionless Free-for-All
In 10 short years, the internet—and social media in particular—blew the system of journalistic friction to pieces.
First the internet transformed publishing. In the mid-'90s, blogging platforms enabled anyone to publish whatever, whenever, without the critical eye of a journalistic colleague. Publishing was now a democratized, zero-cost endeavor.
When the social networks emerged, distribution and reach were also transformed. Within a decade, hundreds of millions of people found themselves perpetually online in new, targetable, frictionless communities. Groups became digital gathering places for ordinary people, and not gatekeepers, to share information. The single-click Share button turned people into active participants in the distribution and amplification of information. Newsfeeds pushed out bite-size posts to friends, and friends of friends. Curation algorithms used likes and favorites to decide what to showcase, and recommendation engines boosted engaging content even further.
Some viral rumors today obtain greater reach than traditional media broadcasts.
Reduced friction has enabled important new voices to be heard, but it has also led to the rapid spread of damaging viral misinformation. The 2020 election, for example, saw farfetched false narratives about stolen elections and CIA supercomputers go viral within hyperpartisan echo chambers. QAnon grew from a small online conspiracy into a decentralized online cult boasting millions of members, who energetically spread nonsense theories about corporations the community alleged were involved in child trafficking. The Covid pandemic saw demonstrably, unequivocally false videos like "Plandemic," which espoused numerous lies and conspiracies, reach audiences of millions before platforms decided to take them down.
As the US (and other countries) struggle with a crisis of democracy, public health, and other outgrowths of the information environment, it’s clear that current answers aren’t working. Attempts to stifle viral rumors retroactively through content moderation and takedowns are inadequate. And common scapegoats, like bots and algorithms, commandeer much of the attention in debates over solutions. But the reality is more nuanced: Bots do spread misinformation, but most platforms have since reined in the impact of automation. Recommendation algorithms do influence consumption, but they are not the only dynamic in play.
It’s time for proactive solutions; it’s time to reintroduce the sort of friction that can assist with collective sense-making.
Lies Are Fast. Truth Is Slow.
Seneca the Younger apocryphally wrote: “Time discovers truth,” an idiom we still hear today as “time will tell.” Time is a critical component in determining accuracy, allowing more opportunities to filter, assess, and confirm.
Because information is now able to leap between human minds, friction-free, we may need to rethink some of the core “truths” of the modern social web. Chief among these is the paradigm that breaking information must be posted and spread instantaneously. We are operating in an environment in which high-velocity information is a significant driver in the spread of misinformation, falsehoods, and propaganda, particularly because of how it intersects with virality. Researchers from MIT have found that false news spreads further, and faster, than real news.
As we reimagine a more trustworthy social web, we can rethink the relationship between velocity and virality. Low-velocity content can still go viral: a good book we share with our friends, say, or a word-of-mouth recommendation for a film. One way to do this would be a system in which rapidly or broadly spreading content is temporarily throttled by platforms to give fact-checkers time to assess it. This need not apply to all viral content; it could be tailored to the topics most likely to cause harm: politics, health, or breaking news. It’s a model other industries already use. Wall Street exchanges, for example, use circuit breakers to pause trading so the public can appropriately digest emerging information before stocks go haywire.
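To make the circuit-breaker analogy concrete, here is a minimal sketch of how such a throttle might work. The class name, thresholds, and rolling-window design are all hypothetical illustrations, not any platform's actual system: a post that accumulates shares faster than a set rate is held back from further amplification until a reviewer clears it.

```python
from collections import defaultdict, deque
import time

class ViralityCircuitBreaker:
    """Hypothetical sketch: pause amplification of fast-spreading posts
    until a fact-check clears them. All thresholds are illustrative."""

    def __init__(self, max_shares_per_window=1000, window_seconds=3600):
        self.max_shares = max_shares_per_window
        self.window = window_seconds
        self.share_log = defaultdict(deque)  # post_id -> share timestamps
        self.throttled = set()               # posts awaiting human review

    def record_share(self, post_id, now=None):
        """Log a share. Returns False if amplification should be held."""
        now = time.time() if now is None else now
        log = self.share_log[post_id]
        log.append(now)
        # Drop shares that have aged out of the rolling window.
        while log and now - log[0] > self.window:
            log.popleft()
        if len(log) > self.max_shares:
            self.throttled.add(post_id)
        return post_id not in self.throttled

    def clear(self, post_id):
        """Called when fact-checkers approve the content."""
        self.throttled.discard(post_id)
```

Note the design choice: the breaker never deletes anything. Like its Wall Street namesake, it only buys time, and a human decision (`clear`) restores normal distribution.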
Give Users a Nudge
Stopping conflagrations of high-impact misinformation before they happen shrinks the supply of poor information, and avoids the difficult blowback that comes from heavy-handed content moderation.
A helpful and practical metaphor can be taken from the Nobel Prize-winning work of Daniel Kahneman, whose research identified two key “systems” in our mental operations: System 1 is the fast, instinctive, and emotional mode; System 2 is the slower, more deliberative, and more logical way of thinking and consuming information. System 1 is prone to biases and mental shortcuts that allow us to make snap decisions, while System 2 helps us with complex and nuanced problems.
Both systems are helpful in our daily lives, but System 1 thrives within digital architecture that prioritizes speed and impulsivity. From clickbait to emotionally arresting, outrage-inducing news, the social web is now built to capitalize on System 1, tilting us all towards the reactive, automatic, and unconscious.
We can use this as a frame for thinking through design changes and frictions that might push people towards System 2, away from emotional shares and towards pro-social and reflective ones. Some of this work has been confirmed by the research of Nicholas Christakis at Yale, as well as research showing that other design frictions can improve cognitive decision-making. Indeed, many of these nudges are already being used by tech companies: from interstitial warnings over misleading or false content (famously placed over Trump’s tweets), to prompts alerting people that certain information has been flagged in the past, or that their comments are likely to be interpreted as toxic.
Various interventions at Instagram, Twitter, TikTok, and elsewhere have shown that such nudges can meaningfully improve the type of content we see and respond to on the internet. These include prompts asking people if they’d like to read an article before retweeting it, warnings suggesting a domain is low-quality, and notes that a word used in a comment is generally unproductive for discourse, asking whether the author might like to revise. Open design libraries of testable interventions would go far in encouraging adoption across platforms.
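The nudges above share a simple shape: check a behavioral signal, and if it suggests a reactive System 1 share, interpose a question rather than a block. A minimal sketch, with entirely hypothetical function names, signals, and prompt wording (no real platform API is implied):

```python
from typing import Optional

def nudge_before_share(opened_article: bool,
                       domain_reputation: float) -> Optional[str]:
    """Return a prompt to show the user, or None to let the share proceed.

    opened_article: whether the user actually clicked through to the link.
    domain_reputation: assumed score from 0.0 (low quality) to 1.0
    (high quality), e.g. maintained from fact-checker domain ratings.
    """
    if not opened_article:
        # The "read before you retweet" prompt.
        return "Want to read the article before sharing it?"
    if domain_reputation < 0.3:
        # The low-quality-domain warning.
        return "This site has often shared misleading content. Share anyway?"
    return None  # No friction needed; share goes through immediately.
```

The key property is that every branch still lets the user share; the design only inserts a moment of System 2 deliberation.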
Interstitials and warnings can be helpful for reducing the spread of disinformation.
Speeding Up Verification
New tools also show promise in speeding up the rate of verification itself, meeting high-speed mis- and disinformation as it spreads. Several recent studies have yielded encouraging new fact-checking methods: using crowds to verify or debunk claims far faster than professional fact-checkers can, with similar levels of accuracy.
Crowdsourcing from a group of 1,128 users, researchers were able to assemble online groups as small as 10 individuals that could accurately determine whether or not an article was false, about as well as professional fact checkers. Supplemented by algorithms, a system like this could be trained to identify fake news at the speed and scale at which it spreads.
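The aggregation step behind this result can be sketched in a few lines. This is an illustrative majority-vote simplification, not the actual method from the study, which used balanced panels and continuous rating scales:

```python
from statistics import mean

def crowd_verdict(ratings, threshold=0.5):
    """Aggregate lay ratings of an article (1 = seems true, 0 = seems
    false) into a binary verdict by simple majority.

    A sketch of the crowdsourced fact-checking idea: with enough
    independent raters, the averaged judgment approaches the accuracy
    of professional fact-checkers.
    """
    return mean(ratings) >= threshold

# A hypothetical panel of 10 raters, most judging the article false:
panel = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
# crowd_verdict(panel) -> False (the crowd rejects the article)
```

In practice the averaged crowd signal would feed a ranking or labeling system rather than a hard verdict, and rater panels would need to be balanced to resist the partisan gaming discussed below.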
Furthermore, open-sourcing these methods of verification, so they are auditable and transparent enough to be easily understood, might help ease claims of bias and censorship. An early attempt at this can be seen in Twitter’s Birdwatch, which leverages the community to flag misleading tweets. The system is new and imperfect, and there are clearly ways it can be gamed (a problem for any verification system), but it’s an important first attempt.
But Who Determines Truth?
Each of these three interventions requires someone, somewhere to make a determination as to what is true, or what is high quality. This “baseline” truth is a critical piece of the puzzle, yet an increasingly fraught idea to address.
Controlling the narrative will always be contentious, and any system that attempts to fix disinformation will be attacked for partisan bias. Indeed, extreme partisanship is directly associated with sharing fake news. Social media seems especially effective at drawing partisan battle lines around more and more issues, even when those issues are not inherently partisan.
But this is a new manifestation of an age-old problem: How do we verify knowledge? And how might we do it quickly enough to be reliable? Who do we trust in society to establish truth? Here we are wading into tricky epistemological territory, but one with precedent.
Let’s look at other services we regularly use to verify facts—imperfect but powerful systems we have come to rely upon. Google and Wikipedia have, writ large, built reputations on effectively helping people find accurate information. We generally trust them, because they have systems of verification and sourcing embedded in their design.
The frictionless design of the current social web has undermined the necessary precondition to democratic functioning: shared truths.
Implicit in our three recommendations is a trust and faith in the basic journalistic process of verification. Journalism is far from perfect; The New York Times gets it wrong, of course, just as all media entities struggle with the selective interpretation of events and with editorial influence over the tone and tenor of stories. But validated information is critical infrastructure, and it has been undermined by social media. Social posts are not news articles, even if they’ve come to resemble them in our news feeds. Verifying new information is a core part of any functioning democracy, and we need to recreate the friction that was previously provided by the journalistic process.
On the horizon are new technologies that will enable both decentralization and end-to-end encryption of social media — immune to any moderation. As these new tools reach scale, viral rumors will become even harder to debunk, and the supply problem of mis- and disinformation will only worsen. We should address how these tools might be designed to rebalance the flow of accurate information now, before we lose our capacity to do so.
This responsibility lands at least partially on our shoulders as individuals. We must be vigilant about identifying the inaccurate, and about finding established, reputable sources of knowledge — both academic and journalistic. Too much institutional skepticism is toxic for our shared reality. We can redouble our efforts to find ways of carefully, and compassionately, sourcing truth together. But platforms can help, and must help, tilt the design of our shared spaces towards verifiable facts.
Data visualizations by Tobias Rose-Stockwell

