
Disinformation: A New Challenge to Democracy or More of the Same?

Recent headlines on the role of Russian disinformation in the 2016 U.S. presidential election have ignited important policy discussions about the impact of information warfare on democratic systems. Disinformation is not new; it has been used for decades to tilt policy in favor of its perpetrators. But developments in social media, big data, and artificial intelligence mean that disinformation now poses a very different type of threat to democracy.

Disinformation is a tactic that uses false information to elicit a response that benefits its designer. It is distinct from misinformation and propaganda, with which it is often conflated. Misinformation is incorrect information that originates from human or technical error, or from false facts and narratives unwittingly published. Propaganda is also crafted to evoke a response intended by its designer, but it can be based on fact. The distinction to remember is that disinformation is always intended to confuse or mislead, and it is always built on false information.

One of the biggest threats of disinformation is its ability to undermine public trust in the core institutions of democracy. Russian disinformation efforts today, for example, work to undermine the moral authority of the democratic model, attacking the core pillars upon which democracy depends: the legitimacy of the press and citizens' trust in government. Instead of extolling the virtues of its own system, Russia has sought to exploit fears and anxieties, stoking divisions within democratic systems.

The origins of disinformation can largely be traced to the advent of mass media. In the 1930s and 1940s, the Nazis used anti-Semitic publications to incite fear and suspicion of the Jewish people. The term itself comes from the Russian dezinformatsiya, the name of a KGB division devoted to covert propaganda. In the 1980s, a Soviet disinformation campaign accused the U.S. of engineering acquired immune deficiency syndrome (AIDS) as a biological weapon, through articles published in the Indian newspaper Patriot and the Soviet weekly Literaturnaya Gazeta. The story was later picked up by papers in more than 60 countries around the world.

While disinformation has a long history, its reach, scale, sophistication and effectiveness have climbed to unprecedented levels, due in large part to the ubiquity and viral speed of social media. Automation and artificial intelligence have dramatically lowered the cost of spreading disinformation at scale. Digital disinformation is sometimes framed as computational propaganda, defined as “the use of digital information and communication technologies to manipulate perceptions, affect cognition, and influence behavior.” Computational propaganda relies on automated bots, accounts that pose as legitimate social media users, to artificially amplify the reach of disinformation; bots can also target individuals by collecting data on their preferences and online behaviors.
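To make the amplification pattern concrete, the sketch below shows one simple signal analysts look for: the same message posted by many distinct accounts within seconds of one another. This is a minimal illustration in Python; the record format and thresholds are assumptions chosen for the example, not part of any real detection system.

```python
# A minimal, illustrative sketch (not a production detector): given a list of
# post records, flag messages that many distinct accounts publish almost
# simultaneously -- one common signature of automated amplification.
# The record format and thresholds below are assumptions for illustration.
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=20, window_seconds=60):
    """posts: iterable of dicts like {"account": str, "text": str, "ts": float}."""
    by_text = defaultdict(list)
    for post in posts:
        # Normalize lightly so trivial variations still group together.
        key = " ".join(post["text"].lower().split())
        by_text[key].append(post)

    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        timestamps = sorted(p["ts"] for p in group)
        burst = timestamps[-1] - timestamps[0] <= window_seconds
        if len(accounts) >= min_accounts and burst:
            flagged.append(text)
    return flagged

# Hypothetical usage: 25 accounts posting the same claim within 25 seconds.
posts = [{"account": f"user{i}", "text": "MH17 was shot down by a jet!", "ts": 1000.0 + i}
         for i in range(25)]
print(flag_coordinated_posts(posts))  # -> ['mh17 was shot down by a jet!']
```

Real bot networks vary their wording and timing to evade exactly this kind of check, which is why detection in practice combines many such signals rather than relying on any one rule.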

Bots and computational propaganda have become a routine element of hybrid warfare. For example, in 2014, a missile brought down Malaysia Airlines flight MH17 over eastern Ukraine. Before the investigation into the crash had even begun, a Twitter user named “Carlos” claimed to have seen a Ukrainian military aircraft in the area of the catastrophe, suggesting MH17 had been attacked by Ukraine. The story was quickly picked up by the Russian news channel RT, and shortly thereafter the Russian Ministry of Defense held a press conference showcasing a fake satellite image of a fighter jet closing in on MH17. Soon after, the fact-checking website StopFake proved that the “Carlos” account was fake.

The pervasiveness and sophistication of disinformation have made it very difficult, if not impossible, for citizens to know whether the footage, photos or news stories they encounter are, in fact, real. In short, technology now allows disinformation to operate at a scale that can undermine the trust necessary for democracies to function. These challenges have led some to question whether democracy can survive the digital age.

A serious problem deserves a serious response. Future blog posts will explore tools, techniques and approaches for countering this threat. It is important to understand that there are ways to deal with the challenge. Tech firms can “design for democracy,” protecting their platforms from abuse and giving citizens options to filter junk out of their social media and news feeds. The same artificial intelligence that is used to sow disinformation can be harnessed to detect false stories, misleading images and automated social media accounts. But tech firms can't do it alone: digital literacy and critical media consumption must also play a role. New technologies, such as blockchain, can provide additional methods of verifying identity and content.
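As a rough illustration of the detection idea, here is a minimal, rule-based sketch for scoring how bot-like an account looks. It is far simpler than the machine-learning systems platforms actually deploy, and the feature names and weights are assumptions invented for this example, not values from any real system.

```python
# A minimal, illustrative sketch of heuristic bot scoring -- not a real
# detector. The features and weights are assumptions chosen to show the
# idea; deployed systems learn such signals from data instead.

def bot_likelihood(account):
    """account: dict with keys 'posts_per_day', 'account_age_days',
    'followers', and 'following' (all hypothetical field names)."""
    score = 0.0
    # Extremely high posting volume is a classic automation signal.
    if account["posts_per_day"] > 100:
        score += 0.4
    # Very new accounts that are already hyperactive are suspicious.
    if account["account_age_days"] < 30:
        score += 0.3
    # Following far more accounts than follow back suggests mass-follow tactics.
    if account["following"] > 10 * max(account["followers"], 1):
        score += 0.3
    return min(score, 1.0)

if __name__ == "__main__":
    suspect = {"posts_per_day": 250, "account_age_days": 12,
               "followers": 40, "following": 4800}
    print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # -> 1.00
```

The design point is that no single signal is decisive; it is the combination of weak signals, scaled up with machine learning, that makes automated detection workable.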

NDI has been working on issues of disinformation and anti-democratic trolling for some time and continues to integrate new approaches to identify, analyze, disrupt and counter disinformation in its work with partners around the world. While the damage being done by disinformation is real, there are ways we can address it. Stay tuned for additional blog postings outlining NDI’s programs aimed at addressing the impacts of disinformation.

Make sure to subscribe to DemWorks to be notified when the next post is published.