When sharing isn’t believing

Why do people share misinformation online?

In a conversation with Jimmy Narang '23 about his job market paper, we learn about the behavior of people who share misinformation on WhatsApp. His research finds that misinformation gets shared disproportionately often even when the sharer doubts its veracity.

What made you interested in this topic (of misinformation) from an economics point of view?

It was a confluence of two or three things. One was just a professional/academic interest in graph theory & networks. I was a software engineer for many years at Microsoft and a CS major in college, and graph theory was an enjoyable part of both school and work. In 2013 I quit Microsoft to work as an RA for Prof. Banerjee and his colleagues [1], who were studying how information diffuses in rural Karnataka (India), and I enjoyed both the theory and the fieldwork side of things.

A second path was more personal. By 2013, WhatsApp had already become a major source of mis- and disinformation [2] in India. I would receive a daily deluge of “forwards”: clickbaity financial scams, rumors of ibuprofen laced with poison, revisionist history villainizing minorities. The kind of stuff that really made me wonder how well I knew my friends or family. It was upsetting and disorienting, but it wasn’t clear how I could translate that concern into a worthwhile research question.

I think that point came in 2018, when a (relatively apolitical) rumor claiming that child traffickers were on the prowl to harvest kids’ organs went viral in the country, and over 70 adults were lynched [3] by mobs on the suspicion of being kidnappers. In an interview that I can’t find now, a resident said to a reporter (and I paraphrase): “Yes, I sent the story to all my friends. I wasn’t sure if it was true, but what if it was?”

That stuck with me. I felt that a common theme in misleading stories, whether home remedies for COVID or moral panics, was that they get disproportionately shared even when the sharer doubts their veracity. But if the person receiving the story doesn’t fully appreciate the sharer’s doubt, they may trust the story more than they should. I wanted to see if there was any truth to that.

You designed the experiment using a custom social media platform. Why was it important to do that rather than observing trends on an existing one (e.g. Facebook or Twitter)?

I think WhatsApp or Messenger are closer to the platforms I had in mind when designing this study; on Twitter or Facebook, you can see the number of likes, retweets, and other signals that confound the thing we’re trying to measure. Still, the broader question stands: why not use an existing platform?

One reason is privacy. Messaging platforms are encrypted end-to-end (as they should be!), so companies cannot share this data even if they want to; and users are naturally reluctant to reveal their private accounts or histories to a random researcher.

But another reason is this: even if I magically had access to that data, it wouldn’t be enough to test the hypotheses outlined above. For example, I wouldn’t be able to tell whether sharers doubted the stories they shared, nor whether their sharing decisions led to any change in recipients’ beliefs at all.

Finally, to understand the mechanisms underlying participants’ behaviors, I needed to provide them with custom signals/information they wouldn’t see on a regular platform; and so I cobbled together something of my own.

Can you tell me more about these “mechanisms”?

Sure. Suppose you (the receiver) think a story is 20 percentage points more likely to be true when your friend Arlen shares it than if you had seen it “directly”. Is that 20-point bump too much? Too little? If it’s too much, is it because you mistook Arlen’s sharing decision for a sign of his belief in the story, or because you overestimated how well Arlen’s beliefs predict what’s true? Or maybe you assessed all the signals correctly, but there was something idiosyncratic, non-Bayesian, about how you incorporated that information into your existing beliefs.

To identify/disentangle these mechanisms, I provided participants with some custom “signals” (information) they wouldn’t see on a traditional platform. For example, instead of saying “Arlen shared a story with you” I might reveal “Arlen thought this story was 60% likely to be true”, and see how you update. Or I might provide you with a “clue” from some third party (instead of Arlen-related info) and see how you update instead.
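To put that decomposition loosely in Bayes-rule terms (an illustrative sketch in made-up notation, not the paper’s formal model): write the update in odds form, and expand the “Arlen shared it” signal through Arlen’s belief \(b\), assuming his decision to share depends on the truth only through that belief:

\[
\frac{P(\text{true}\mid \text{shared})}{P(\text{false}\mid \text{shared})}
= \frac{P(\text{shared}\mid \text{true})}{P(\text{shared}\mid \text{false})}
\times \frac{P(\text{true})}{P(\text{false})},
\qquad
P(\text{shared}\mid \text{true}) = \sum_{b} P(\text{shared}\mid b)\, P(b \mid \text{true}).
\]

The first factor is the likelihood ratio of the “shared” signal, the second is the prior odds. Each mechanism maps to one piece: misreading sharing as a sign of belief means getting \(P(\text{shared}\mid b)\) wrong; overestimating how well Arlen’s beliefs track the truth means getting \(P(b\mid \text{true})\) versus \(P(b\mid \text{false})\) wrong; and a non-Bayesian update means mis-combining a correctly assessed likelihood ratio with the prior odds. The custom signals are what let me separate these pieces.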

What are some conclusions you reached? What do you hope people learn from this experiment? 

Let me start with two stylized facts about sharing, and then give some results about the impact of sharing on receivers’ beliefs.

On average, sharers’ beliefs predict veracity: the more strongly they believe a story is true, the more likely it is actually true, and this holds across a range of topics, political valences, and story-selection algorithms. Thus, if you’re the receiver, there is value in knowing what your friend thinks about the story and (slightly) changing your opinion.
But the same person’s sharing decisions about the same stories do not predict veracity: people share true and false stories at roughly equal rates, and many stories (~25%) are (earnestly) shared even when the sharer thinks there’s less than a 50% chance they’re true. Thus, if you’re the receiver, you shouldn’t really change your belief about a story just because your friend shared it.
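In the same odds terms as above (again, just a sketch), the second fact says the likelihood ratio of the bare “shared” signal is close to one, so a rational receiver’s posterior should stay close to their prior:

\[
\frac{P(\text{shared}\mid \text{true})}{P(\text{shared}\mid \text{false})} \approx 1
\quad\Longrightarrow\quad
\frac{P(\text{true}\mid \text{shared})}{P(\text{false}\mid \text{shared})} \approx \frac{P(\text{true})}{P(\text{false})}.
\]

The first fact, by contrast, says the analogous ratio for the sharer’s stated belief, \(P(b\mid \text{true})/P(b\mid \text{false})\), is informative, which is why knowing what your friend actually thinks is worth something.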

But unfortunately, that is not how receivers behave. Here is what they do instead:

Receivers interpret sharing as a sign of a story’s truth, and update as if stories shared by their friends are ~66% likely to be true (controlling for a bunch of factors). Put differently, sharing causes receivers’ beliefs to increase. Thus, if a widely doubted story is widely shared, it ends up being widely believed.

Moreover, even for stories coming from the same sharer/friend, receivers don’t sufficiently adjust for why the sharer may have shared them. They don’t account for how a story’s importance, urgency or coolness may have led their friend to share it despite their doubts. This “shareability neglect” disproportionately benefits false stories, because false stories are more likely to be shared at low beliefs.

As an aside: people rarely stated why they shared a story when forwarding it, even though they could. Fewer than 10% of stories had any accompanying comment from the sharer, and less than half of those comments hinted at the sharer’s own belief/doubt. When we explicitly revealed that doubt, the change in receivers’ beliefs was much smaller.

What do you think the main takeaways for people would be?

If a friend forwards you a story, consider: is it the kind of story they would share even if they weren’t sure of its truth? If so, maybe hold off on believing it too much. (You could even ask the sharer’s opinion explicitly.) Also, for every friend who decided to share the story with you, consider that there might be several others who, perhaps wisely, decided not to share it.

On the flip side, if you’re about to share something because it feels important, informative, useful or outrageous, give receivers some sense of how strongly you believe it yourself. Or take a moment to consider what the consequences would be if the story later turned out to be false.

1. Arun Chandrasekhar, Emily Breza, and Markus Mobius.

2. Disinformation is false or misleading information that is shared with a deliberate intention to mislead; it’s more strategic. Misinformation, the focus of this paper, is not.

3. See here (Economist), here (BuzzFeed News), here or here (NYTimes).
