Disinformation in the media: Two ASU journalism experts weigh in


Illustration by Meryl Pritchett

August 17, 2020

Editor’s note: This story is part of a series about the impacts of disinformation, how to guard against it and what researchers are doing to stop its spread.

Everyone’s seen them — Facebook posts from your great aunt or old high school friend that are entirely divorced from reality. They’re obviously false, but they have hundreds of shares. With each share, the falsehood gains validity and spreads disinformation further.

Disinformation has always existed, but with the advent of the internet and social media platforms, spreading untrue information is easier than ever.

“The internet allows people to take disinformation and rapidly disperse it to many other people and populations across geographical borders,” said Nadya Bliss, executive director of ASU’s Global Security Initiative. “Now pretty much anyone can institute a sophisticated disinformation campaign and it requires almost no resources to do it right.”

The Global Security Initiative works across disciplines to develop new approaches to security challenges and recently received a Department of Defense research award to combat disinformation by identifying and developing defenses against falsified media and adversarial narratives. This research will detect, characterize and attribute misinformation and disinformation, and help journalists identify and refute it.

Kristy Roschke and Dan Gillmor are Global Security Initiative affiliates working on this project. Roschke is the managing director of ASU’s News Co/Lab and a digital media literacy instructor in the Walter Cronkite School of Journalism and Mass Communication. Gillmor, a Cronkite professor of practice and co-founder of the News Co/Lab, also teaches digital media literacy at ASU. The two recently launched “Mediactive,” a free digital media literacy course.

Roschke and Gillmor met with a Knowledge Enterprise writer to discuss digital literacy and disinformation.

Question: What is the News Co/Lab and how did it come to be?

Roschke: The News Co/Lab is a small lab where we do research and work to advance media literacy.

Gillmor: After the 2016 election, when many bad actors were pushing disinformation, I posted a call to the big tech companies — Facebook, Google and Twitter in particular — asking them to get involved in media literacy. I thought we had to address not just the supply of information, but also demand, and do it at scale. 

My colleagues and I saw three primary ways to do that: education, media and the technology industry. The impacts we can make within education are uncertain because schools are governed at a local level. The second place where I thought we could scale was through the media. Media organizations, including journalists — but not just journalists — should make this a part of their mission to help people understand how to deal with information. Journalists had never done that in any serious way. The third area for scale was the technology industry. 

We held a news literacy working group at the Cronkite School that was co-sponsored by Facebook. People from the tech companies, data scientists, scholars, journalists and others gathered to discuss media literacy. The News Co/Lab came out of that meeting. 

We raised money to get the News Co/Lab launched in the fall of 2017. In 2018, Kristy joined as managing director and she's been very much heading it up since then. She's the driving force now behind all of this.

Question: Why is media literacy important?

Roschke: Media literacy, and the broader bucket of digital literacy, are critical to living in 2020.

When we think about literacy, we think of reading and writing. I really strongly believe that you cannot be a functioning person in society if you don't understand how to interact with technology, information and, more specifically, media. 

We have every opportunity to get whatever information we want, whenever we want it from whatever source we can, which is new in the 21st century. We have not equipped people to understand what to do with that responsibility. I think we are behind, and we need to catch up. So, I advocate for students learning about media literacy from the earliest grades all the way through adulthood. 

Gillmor: A world of low knowledge skills is one where lots of bad things happen, including some people just taking the word of whoever spoke most recently or most ardently. It becomes about whoever is the best demagogue or whoever is the most persuasive, not because they say things that are true, but because they appeal more skillfully to emotion.

We want a place where people decide whom or what to trust based on reality and the understanding that things are nuanced — and that we have to keep learning to arrive at a conclusion based on common, factual bases. Our own choices and decisions may not be the same as others, but the conclusions we come to will at least be based on a mutual understanding and grounded in reality. 

Question: What impact does media literacy have on democracy? 

Gillmor: I don't want to put a gauzy glow on the past and say everything was lovely before the internet arrived, because it wasn't. There's always been a major component of the population that chose belief over everything else.

Some individuals within every major institution that we thought we could or should trust have done remarkably corrupt things within those institutions. So we now have a situation where anyone who's motivated not to trust has a factual basis for that distrust. Any flaw is considered evidence of total flaw. That’s one of our really big problems.

People are now encouraged to base voting and other choices on what they want to believe. People are ignoring science and opting for what is in their guts. That strain of thinking has always been around, and it's always been human nature to look for things that support one's own belief system.

Question: There is a lot of misinformation right now about COVID-19. Why is that and what should we do about it?

Roschke: The constantly evolving nature of this crisis has left people confused, scared and frustrated, which is fertile ground for sowing disinformation. The most popular disinformation narratives fall along party lines, as do levels of trust in news outlets providing COVID-19 information.

Gillmor: An ongoing issue is that what we know is changing, because this is a novel coronavirus and knowledge about it is developing through the scientific process. Journalists have an obligation to explain this clearly and patiently, again and again.

Because the response to the coronavirus has become — astoundingly and sadly — a partisan matter, people need to find health sources that are nonpartisan and based in science, not ideology. The same applies when considering the economic and cultural aspects of a situation that is overwhelming everything it touches.

I'd plead with journalists, in particular, to focus relentlessly on what the evidence shows — to put it in context and help the public understand the simple parts — e.g., why we should wear masks, period — as well as the staggeringly complicated interrelated elements of this crisis.

Question: One emerging form of disinformation is the “deep fake.” What exactly are deep fakes?

Roschke: Deep fakes are broadly defined as deceptive audio and visual content. The term applies when you incorporate advanced technology like machine learning and artificial intelligence to make falsified video and audio very believable.

A deep fake would allow a bad actor to create very believable, simulated audio and video that make people appear to do and say things they haven't said or done. In other cases, deep fakes might use machine-generated faces to fabricate entirely fictional people saying and doing false things.

There's a lot of attention being paid to that — think about not only how easy it would be to be misled, but also what this does to credibility. We rely very heavily on eyewitness accounts, and video has always been one of the most trustworthy ways we confirm information, because seeing is believing. So, what happens in a world where someone can say, “That's not me, I wasn't there”?

Deep fakes are like the dystopian future we might find ourselves in, whether in one year, five years or 10 years, who knows? But in the world we live in now, there are still plenty of ways to deceive people without super sophisticated technology. Media literacy can help people guard against these things. We're not powerless in this. We need to take responsibility for what we know about information and media, and we need to act with informed intent.

Question: What happens in a world where technology is so advanced that people can lie about what they've said or not? 

Gillmor: This idea is not new. I don’t think deep fakes are as big a problem as we worry they are, yet. What concerns me is the idea of these artificially created videos being mass customized and mass targeted at a personal level.

So, suddenly that untrue video you receive — of someone doing something or saying something terrible — will be slightly different than the one that I get, each one designed to push our personal buttons. That idea is going to become easier to execute, and that’s potentially pretty alarming. 

Roschke: A politician, for instance, could feel comfortable saying, “I don't care if you have that video. That video is fake.” We live in a world where there's enough cynicism about how information is produced to believe that's possible, and that is pretty terrifying. 

We definitely need to prepare ourselves for this. But what they're calling “shallow fakes” also exist. Although these are easy to detect, they still confuse people. For example, there was recently a video of Nancy Pelosi giving a speech that someone slowed down so that she sounds very drunk. Slowing down the audio in a video is an easy thing to do, so that's not a deep fake, but it’s something that is confusing people today.
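To make concrete how little skill such a “shallow fake” takes, here is a minimal sketch, assuming the ffmpeg command-line tool is installed and using a hypothetical input file name; the ease of the edit, not this exact recipe, is the point.

```python
# A minimal sketch of how trivially a "shallow fake" can be produced.
# Assumes ffmpeg is installed and on the PATH; "speech.mp4" is a
# hypothetical input file used only for illustration.
import subprocess

def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
    """Re-encode a clip at `factor` of its original speed (0.75 = 25% slower)."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            # Stretch the video timestamps and slow the audio tempo by the same factor.
            "-filter_complex",
            f"[0:v]setpts=PTS/{factor}[v];[0:a]atempo={factor}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    slow_down("speech.mp4", "speech_slowed.mp4")
```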

Question: What kind of media literacy training exists in schools in the U.S. today?

Roschke: Typically, media literacy is going to be in English classes or social studies classes in the context of how to construct an argument and how to find good sources. Students will also learn some historical context for propaganda. You'll see some media literacy taught in journalism classes, where they exist in K–12. 

The News Co/Lab advocates for a set of skills that should be taught in all subject areas and reinforced throughout schooling, as opposed to ticking the box of a single high school class requirement, because media literacy is not a concrete or discrete subject that exists in a vacuum. It touches everything we do.

Question: What responsibility do social media companies have in fighting the spread of misinformation?

Gillmor: I don't want a few giant companies to be the editors of the internet, but people are demanding things from the tech companies that add up to that. They're in a very difficult position. I call this the “do something about it” brigade: if there’s bad stuff on YouTube or Google, and there definitely is, they demand the companies do something about it. I believe that's a potentially dangerous request.

Of course, it's much more complicated than that. Certainly, Facebook and Google are already editing by virtue of the algorithms they use to promote or demote what shows up in people's feeds or recommendations. Do we really want them to be forced to make granular edits of individuals’ speech, and in effect have the power to overrule the First Amendment? Does the Facebook terms of service, given the degree to which conversation now takes place on Facebook, overrule the First Amendment in that public square? Yeah, it does. And that worries me a lot. 

Roschke: I think the responsibility is to squash information that has been proven false via fact-checking mechanisms, much as they do with content that violates the terms of service. Hate speech and the like gets automatically downplayed, and I think it should be removed entirely. But if something has been fact-checked and proved false, it gets labeled as such. On Facebook, if you try to share a post that’s false, it will have a gray box over it that says, “This has been fact checked and proved false, are you sure you still want to share it?”
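As a rough illustration of the label-rather-than-delete approach Roschke describes, here is a hypothetical sketch in Python; the claim list, the Post type and the exact-match rule are all invented for illustration and do not reflect Facebook's actual system.

```python
# Hypothetical sketch of a fact-check labeling pipeline; not any platform's real system.
# A post repeating a claim that fact-checkers have rated false is labeled and
# downranked rather than silently deleted.
from dataclasses import dataclass
from typing import Optional

# Invented example data: claims already rated false by fact-checkers.
FACT_CHECKED_FALSE = {
    "5g towers spread the coronavirus",
}

@dataclass
class Post:
    text: str
    label: Optional[str] = None
    rank_penalty: float = 0.0  # how strongly the feed algorithm demotes the post

def review(post: Post) -> Post:
    """Label and downrank a post that repeats a fact-checked false claim."""
    if post.text.strip().lower() in FACT_CHECKED_FALSE:
        post.label = "This has been fact checked and proved false."
        post.rank_penalty = 0.8  # shown far less often, but not erased
    return post

if __name__ == "__main__":
    p = review(Post("5G towers spread the coronavirus"))
    print(p.label, p.rank_penalty)
```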

I think those types of activities could help. There should be a squashing of misinformation that's been proven false, and there should be a boosting of quality information.

Question: How can I reduce misinformation in my social media feed? 

Roschke: We don't have to be experts to make a couple of subtle changes in our lives that might make things better. If your Twitter feed is riddled with junk, maybe it's time to take a look at whom you're following and unfollow some of those people. In a certain way, if misinformation is the tree that falls in the forest and you're not around to hear it, you're not impacted by it.

There are things we can do, and that's where my attention is: making us more aware and better able to participate in this environment in a way that gives us more control and a sense of responsibility and empowerment. That's not going to solve the problem, but it's certainly not going to hurt. And I think it will help in ways that shouldn't be underestimated.

Gillmor: Yes, it’s the idea of supply and demand. Supply is what people publish. Demand is how we handle and react to what people publish. And the place where supply and demand intersect most obviously is sharing, because sharing is an act of publishing, but it's triggered by your consumption.

But, again, I'm wary of solutions that would have the effect of curbing freedom of expression. I think the consequences of making it illegal to lie — the Supreme Court has made it clear that doing so violates the First Amendment in general — would be much worse than where we are already.

I come back to trying to improve the supply by working with people. If we can get the people who want to do it right to do it right more often, that moves the needle, because the media we rely on has lots of flaws. We want to help them do better — and they want to do better.

And we can improve the demand, so that at some level it becomes less profitable to con people. It’s a similar issue to the demand in society in more recent decades for better, healthier food. 

Question: How can your average person combat misinformation online?

Gillmor: Take a breath before you believe stuff that is designed to trigger your emotions. 

Read widely on things that interest you. Don't stay with single sources. Diversifying our information diet, or consuming information from a variety of sources, is one of the most powerful choices we can make to combat disinformation. 

Ask your own questions. If you do that, you've made a good start.

Written by Madison Arnold
