Disinformation increases social divisions and undermines our democracy. ASU Research Scientist Scott Ruston says that defending ourselves against this willful deception is part of our civic duty — and explains how to do so.
Illustration by Meryl Pritchett
By Madison Arnold
June 26, 2020
Editor’s note: This story is part of a series about the impacts of disinformation, how to guard against it and what researchers are doing to stop its spread.
From yellow journalism at the turn of the 20th century to conspiracy theories about COVID-19, disinformation is, and has always been, a risk of media consumption.
Researchers at Arizona State University study disinformation to understand how it shapes the social and political landscapes that influence daily life, and to help us recognize and avoid it.
“We would like to identify ways where we can both make the population more resilient to disinformation and create tools and technologies that allow detection, and essentially mitigation of the spread of disinformation,” says Nadya Bliss, executive director of ASU’s Global Security Initiative.
GSI works across disciplines to develop new approaches to security challenges that are “global in scale, borderless by nature, interdependent and often have no clear solutions.”
Research Scientist Scott Ruston leads GSI’s Narrative, Disinformation and Strategic Influence research pillar. With over two decades of active and reserve service in the U.S. Navy, Ruston now helps strengthen the nation’s security through his expertise in narrative and strategic communication.
Ruston spoke with a Knowledge Enterprise writer to discuss his research on disinformation, how it spreads and how it affects our lives.
What is disinformation and how is it different from misinformation?
The commonly held definition of disinformation used throughout the government and the academic community is false or inaccurate or misleading information spread with a willful intent to deceive. The “or” between false or inaccurate or misleading is important. Disinformation can be a perfectly truthful piece of information that is packaged in a misleading manner or context and thereby used with an intent to deceive.
Although the definitions are similar, the distinction that we use is the willful intent to deceive. Misinformation is false or inaccurate or misleading information that spreads for any number of purposes. A piece of disinformation can be started by somebody who has malicious intent, but then get unwittingly picked up by somebody else who spreads it further as misinformation. This fluidity makes for research challenges.
How do you study disinformation?
My expertise lies in examining narratives and how we tell stories. Narratives are systems of stories — the events recounted in a news article are a story that intersects with stories shared between friends and stories of national or ethnic identity.
A narrative is a core human method for making sense of the world around us by organizing these stories into a structure that includes cause, effect and consequence. Stories feature agents — often people, sometimes organizations or nations — and how those agents engage in actions expresses values and principles, as well as cause and effect. In that way, narratives provide frameworks for understanding the world around us.
Another key thing I examine is the flow of information — how it spreads through different pathways and networks, and at what level the media shares it. I work to identify the platforms and mechanisms people use to spread disinformation, as well as the rhetorical mechanisms used to communicate it.
How can disinformation affect democracy?
In many ways the fundamental underpinnings of democratic society are at risk because they depend so much on willing, informed citizen participation and expression of political will. And if the basis of those decisions made by the citizens is corrupted by disinformation, then that's a hijacking of domestic society.
Democracy depends on the expression of political will: electing people who represent your interests, who then work together. We're at a moment right now where there's not a lot of that working together happening.
You can imagine a politician who crosses the aisle to do some sort of bipartisan work, which is valuable to everybody, but gets attacked by a disinformation campaign that misleadingly characterizes that work and reduces the politician’s electability or likability. Well, now that politician is less likely to reach across the aisle anymore. And now we end up in this pendulum swing of politics, which ultimately results in paralysis, and we can't get anything done. That means decision-making for the protection of our national interests is compromised.
Furthermore, disinformation enhances social divisions and can affect who’s coming to the polls to vote, such as making people feel their vote is worthless and depressing turnout (as well as the reverse). There are indications, too, of disinformation campaigns spreading false claims about where to vote, thereby disenfranchising people by obstructing their ability to cast votes.
What role did disinformation play in the 2016 election?
In the analysis of the 2016 election, many activities were attributed to the Internet Research Agency, which has links to the Russian intelligence services. They posted on Facebook information that was specifically targeted to appeal to certain political groups, or they communicated events, activities or ideas that would stimulate an angry reaction by the other side of the political spectrum.
An example of the Internet Research Agency’s work was the creation of a group called Black Matters, which sounds an awful lot like Black Lives Matter. A lot of their postings were ideologically consistent with the claims of Black Lives Matter. They would show a picture of a crying African American person with a caption like, “My brother got arrested and beaten to death by cops.”
Such posts and memes contribute small stories into a broader system of stories — a narrative — of conflict between the African American community and law enforcement. These particular posts happened to be fabrications, but they play on real events that have happened to stoke further outrage.
There's no domestic American offshoot of the Black Lives Matter movement that is called Black Matters. It was completely made up by the Internet Research Agency. They were able to purchase sponsored posts and, through the demographics that Facebook collects, target their posts to the feeds of people that are ideologically aligned with the Black Lives Matter cause, as well as people who were deemed by demographics to be absolutely opposed to Black Lives Matter. There was no genuine, honest, grassroots, American-based organization that was organizing the counter protest events that Black Matters was promoting.
When does disinformation become a national security issue?
Looking at the 2016 election, the work of the Internet Research Agency is alarming for its cleverness and subtlety. If you read the individual posts, they're not subtle at all, but the overall design of the campaign is pretty subtle.
It's not like a billion-dollar aircraft carrier or a super-secret silent submarine that the Russians built. They put a few tens of thousands of dollars into buying some sponsored posts and creating some groups with names that are very appealing to specific sides of the political spectrum. It was very specific, targeted disinformation designed to stimulate unrest.
A whole lot of the basis of the society that we have come to enjoy in the 21st century in the United States is based on some taken-for-granted elements of how our society operates. We think everything's codified in law, but really it is in our faith in these institutions like the education, financial and justice systems. Disinformation campaigns that erode faith in those systems can cause civil unrest and that can snowball pretty darn fast.
For example, if there were disinformation seeking to erode faith in the banking system, the situation could create the level of societal disruption of the Great Recession. However much money you have in an account, you don't have it physically. It's not in gold coins and it's not in currency. It’s in a computer system somewhere that says you have $100 in a checking account. Well, if you lose faith that the computer system is telling you the truth, you're going to behave differently. You're going to take all the money out of it.
If you take all of your money out, and 300 million people nationwide take all their money out, now there's no money to move around. The economy depends on that money moving around. Now we have some form of societal degradation that possibly leads to collapse.
I’ve noticed you haven’t used the term “fake news” at all. Is there a reason for that?
I don't like the term fake news because it has been co-opted as a label by a lot of politicians on both sides of the aisle simply for coverage that they don't find flattering.
Why are social media platforms such fertile ground for disinformation?
Social media has an imperative to deliver you content that it thinks you want, based on what it knows about you. For example, everybody thinks that Facebook is free, but Facebook is fundamentally not free. You are paying for Facebook with your personal profile and your network. You're giving up an enormous amount of information about yourself that Facebook can then aggregate and feed to marketers.
Any action you take on social media is tracked, and that’s valuable information. Social media platforms write their algorithms in such a way that things will be fed to you that you're more likely to take action on. One reason why so much material on social media ends up so extreme, especially when it's politically inflected material, is because people will react and share it.
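The engagement dynamic described above can be sketched as a toy ranking function — a hypothetical illustration, not any platform's actual algorithm: when a feed simply orders posts by predicted interaction, content that provokes strong reactions naturally rises to the top, regardless of accuracy.

```python
# Toy sketch of engagement-based feed ranking (hypothetical; real
# platform algorithms are far more complex and not public).

def rank_feed(posts):
    """Order posts by a simple predicted-engagement score."""
    def engagement_score(post):
        # Weight reactions that spread content (shares) most heavily,
        # since sharing is what propagates a post to new audiences.
        return (3 * post["predicted_shares"]
                + 2 * post["predicted_comments"]
                + 1 * post["predicted_likes"])
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "measured-report", "predicted_shares": 2,
     "predicted_comments": 5, "predicted_likes": 40},
    {"id": "outrage-bait", "predicted_shares": 30,
     "predicted_comments": 60, "predicted_likes": 20},
]

feed = rank_feed(posts)
# The inflammatory post outranks the measured one, even though
# nothing in the score reflects whether either post is true.
```

The point of the sketch is that accuracy never enters the scoring function: a system optimized purely for engagement rewards whatever people react to most.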
It’s also important to recognize that information coming to us via social media isn’t segmented into “news,” “analysis,” “opinion” and “advertisement.” Those distinctions, which have been clear in print, radio and television media for decades, are less clear on social media platforms. The line between factual news reporting and opinion-based journalism has blurred significantly.
What’s your advice for people to avoid being deceived?
Read widely. Look at how the same core topics and events are reported on in a handful of news outlets.
Determine whether an article is news or opinion. Reputable journalism platforms, whether major newspapers or television news, will distinguish between the two. Opinion journalism is a valuable thing to include in your reading — the analysis might illuminate interesting ways to think, or potential ramifications of an issue — but always keep in mind its status as opinion.
Be attentive to time. Disinformation very often uses photos to lend a claim truth value, but those photos might be 10 years old. Or a claim may be circulating again that was debunked long ago.
Confirm the information with another source. Use Google, DuckDuckGo or Bing to search for the topic and find corroboration of the event from an alternate source. If the post makes claims about a bill — go read the bill! Use fact-checking sites like Poynter and PolitiFact to add insight and context, and also be attentive to whether journalists (or your friends on Facebook) are citing primary sources and whether there are multiple corroborating sources. Good journalism attributes its sources and finds corroboration before publishing.
Pay attention to your emotions. If you read something and your reaction is any sort of extreme emotion, whether outrage or unmitigated joy, that’s a clear indicator that you should read more deeply. If you read a tweet and you want to pick up the phone and call your congressman and scream about how awful some situation is, then it’s very likely that your buttons are being pushed and some sort of manipulation is happening.
Does defending yourself against disinformation take work? Absolutely. But living in a democracy is a privilege and as citizens we have a duty to discharge our civic responsibility—and defending ourselves against disinformation is one part of our contribution to a thriving, healthy democracy.
GSI is partially supported by Arizona’s Technology and Research Initiative Fund. TRIF investment has enabled thousands of scientific discoveries, over 800 patents, 280 new startup companies and hands-on training for approximately 33,000 students across Arizona’s universities. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.