What makes some people more susceptible to disinformation?

New research is shedding light on why certain people are more susceptible to disinformation and what motivates individuals to choose to join hate campaigns. Photo credit: Dave Haygarth

Disinformation is one of the thorniest problems facing citizens online around the world today. Recent reports – from the Oxford Internet Institute’s survey of computational propaganda in nine countries to Freedom House’s finding that 18 countries saw disinformation around elections in 2017 – have highlighted that the problem is not only present but growing graver in the absence of proper solutions. Tim Berners-Lee, the inventor of the World Wide Web, wrote just a year ago that disinformation is one of the three main challenges facing the web and its users in the future.

Properly addressing this challenge without infringing on users’ right to free speech is at the center of modern disinformation researchers’ efforts around the globe. Threading this needle requires recognizing that the problem is both computational and socio-political in nature, and finding corresponding solutions in both domains. While considerable thought and research have been devoted to technological solutions (such as making political bots transparent), efforts to understand the human mechanics of disinformation are still nascent. Exploring which demographics are most vulnerable or most likely to be targeted, why they are receptive to disinformation, and how disinformation spreads within their networks online and offline is key to finding effective long-term solutions.


To that end, this month the Digital Intelligence Lab, a digital rights NGO based in Palo Alto, CA, is releasing a paper I co-authored with two colleagues – State-Sponsored Trolling: How Governments are Deploying Disinformation as Part of Broader Digital Harassment Campaigns. In this report, we explore how governments around the world are weaponizing surveillance systems and propaganda apparatuses to intimidate and silence individuals critical of the state. In state-sponsored trolling, many problems familiar to digital rights researchers – disinformation, unlawful hacking and surveillance, the use of bots and computational propaganda – combine and metastasize into a larger phenomenon.

An insidious element of these campaigns is that ordinary citizens, motivated by neither payment nor state sponsorship, may still join hate and harassment campaigns after being incited by disinformation. Observing this, a key part of our methodology was interviewing subjects who have been targeted by state-sponsored online hate and harassment campaigns, both to understand the campaigns’ human impact and to probe the possible motivations of citizens who joined in organically.

In a similar vein, NDI has recently developed a set of innovative opinion research methodologies to shine a light on the human side of information flows as part of its INFO/tegrity Initiative. The results of its first research effort into these human elements of disinformation, conducted in Ukraine, have been shared with partners in Silicon Valley and Washington, D.C., and will be integrated into NDI’s efforts to combat disinformation worldwide. Beyond research, some laudable social solutions to the problem are already underway – such as Taiwan’s new digital literacy school programs or Twitter’s on-the-ground cooperation with local NGOs and experts. These efforts, coupled with research that elucidates the human mechanics of disinformation, will ultimately give us the greatest chance of solving the social side of this problem.

While remedying disinformation’s ramifications can seem far off, pessimistic impulses have been proven wrong before. Steven Pinker, for instance, argued in The Better Angels of Our Nature that humanity has counterintuitively grown less violent over the course of its history. Understanding both the technological and social sides of disinformation empowers us to address the problem holistically, rather than attacking it with piecemeal, short-term fixes. In the wake of last month’s Cambridge Analytica revelations, Tim Berners-Lee wrote about his vision for an internet that still benefits humanity:

You can fix it. It won’t be easy but if companies work with governments, activists, academics and web users we can make sure platforms serve humanity. […] My message to all web users today is this: I may have invented the web, but you make it what it is. And it’s up to all of us to build a web that reflects our hopes & fulfills our dreams more than it magnifies our fears & deepens our divisions.