ASU to create ‘algorithmic armor’ to combat disinformation in media


A public-private partnership will bring interdisciplinary researchers together to design tools to defend against disinformation.


Illustration by Meryl Pritchett

By Maya Shrikant

April 5, 2021

Sharing information through storytelling is a time-honored human tradition, whether it’s passing down family history around the dinner table or acting out scary stories around the campfire.

In the digital age, the glow of a computer screen replaces that of campfires and our storytelling happens through Snapchat stories and Reddit threads. But the qualities that make these stories relatable and memorable remain deeply human. 

When such stories cause harm by spreading disinformation, the solution requires an understanding of both their technological and human aspects. For this reason, the Global Security Initiative at Arizona State University has convened a cross-disciplinary team — from computer scientists to journalists — to investigate the types of language and logic that humans use to derive meaning and develop new ways to combat disinformation online. 

The Semantic Forensics (SemaFor) program, funded by the Defense Advanced Research Projects Agency (DARPA), aims to create innovative technologies to help detect, attribute and characterize disinformation that can threaten our national security and everyday lives. 

ASU will participate in the SemaFor program as part of an $11.9 million federal contract with Kitware Inc., an international software research and development company. The project, titled Semantic Information Defender (SID), aims to produce new falsified-media detection technology. The multi-algorithm system will ingest large amounts of media data, detect falsified media, attribute its source and characterize malicious disinformation.
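For readers who want a concrete picture of that detect-attribute-characterize flow, here is a minimal sketch in Python. It assumes a simple three-stage pipeline; every name in it (MediaItem, FalsifiedMediaPipeline, detector, attributor, characterizer) is a hypothetical illustration, not part of the actual SID or SemaFor software.

```python
# A minimal sketch of a detect -> attribute -> characterize pipeline,
# loosely following the article's description of SID. All names here
# are hypothetical illustrations, not the actual SID or SemaFor API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MediaItem:
    source_url: str
    text: str                          # article body, caption, or transcript
    image_bytes: Optional[bytes] = None


@dataclass
class Analysis:
    is_falsified: bool    # detection: does the item appear manipulated?
    likely_origin: str    # attribution: where did it plausibly come from?
    intent: str           # characterization: e.g. "satire", "malicious", "benign"


class FalsifiedMediaPipeline:
    def __init__(self, detector, attributor, characterizer):
        # Each stage is an independently developed model or rule set.
        self.detector = detector
        self.attributor = attributor
        self.characterizer = characterizer

    def analyze(self, item: MediaItem) -> Analysis:
        falsified = self.detector(item)                                # stage 1: detect
        origin = self.attributor(item) if falsified else "n/a"         # stage 2: attribute
        intent = self.characterizer(item) if falsified else "benign"   # stage 3: characterize
        return Analysis(falsified, origin, intent)
```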


A serious national security threat

The team at ASU is composed of disinformation detection experts, journalism professionals, humanities researchers and computer scientists. According to Nadya Bliss, executive director of the Global Security Initiative, the complex nature of disinformation systems requires an interdisciplinary team. 

“GSI is focused on leveraging ASU expertise to help achieve mission success in the national security sector, and there are few national security issues more pressing than combating disinformation,” says Bliss. “ASU has experts working on the challenge from a number of disciplines — education, journalism, narrative analysis, computer science and others. So, in some ways the Department of Defense’s Semantic Forensics program is a perfect fit for GSI to get involved in — a serious national security threat that requires significant interdisciplinary expertise to address, which ASU is uniquely positioned to deliver.”

Scott Ruston, a research professor and director of the Center on Narrative, Disinformation and Strategic Influence in ASU’s Global Security Initiative, leads the team as principal investigator. Ruston brings expertise in the areas of narrative theory, media studies and human influence to inform the design and features of the SID system. 

“Humans have a distinct use of language, concepts and ways of organizing events and ideas — such as stories — to share information and knowledge. Computers and digital media have accelerated our rate of information exchange and dramatically expanded the volume of information shared, but disinformation is fundamentally a human problem,” says Ruston. “Any tools that get developed to address disinformation have to take into account how humans communicate, how humans process information and how humans make decisions about their beliefs.”

For example, new stories that align with well-known, already-believed stories about oneself or one’s community are adopted more readily and less critically by readers. In the American context, our cultural understanding of events like the Boston Tea Party and the Revolutionary War cements the values of patriotism, protest and independence in our national identity. But disinformation actors can embed their false or misleading information within stories that are very similar to these iconic narratives or that explicitly reference key events, thus increasing their likelihood of adoption.

"Computers and digital media have accelerated our rate of information exchange and dramatically expanded the volume of information shared, but disinformation is fundamentally a human problem" -Scott Ruston

“The comfort with the underlying story makes the disinformation material more prone to belief and less prone to critical review,” says Ruston. “It passes that ‘gut check’ and leaves people prone to believing misinformation that emphasizes their values. We see this phenomenon of disinformation actors leveraging known stories of a community’s culture or identity occurring both at home and abroad.”  

Huan Liu, professor in the School of Computing, Informatics, and Decision Systems Engineering and co-principal investigator for the SID project, will oversee the creation of the disinformation detection algorithm. He stresses the importance of adaptability for analyzing the human dimensions of data being fed into the tool. 

“Disinformation is evolving and changing over time. And some news isn’t entirely false, with only bits and pieces of disinformation. We will have to teach this algorithm what media manipulation looks like today, and then the machine will have to identify these patterns of disinformation through its learning in order to be successful,” says Liu. “An algorithm that can adapt and learn from the characteristics of disinformation from various topic areas and social settings will make it a more useful tool.”
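To illustrate the kind of adaptability Liu describes, here is a minimal sketch of a text classifier that can be updated incrementally as labeled examples arrive from new topic areas. It uses scikit-learn's HashingVectorizer and SGDClassifier as stand-ins, with made-up example sentences; the article does not describe the actual SID algorithms at this level of detail.

```python
# A minimal sketch of an incrementally updatable disinformation classifier,
# using scikit-learn as a stand-in for whatever models SID actually employs.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)   # stateless, so no re-fitting needed
classifier = SGDClassifier(loss="log_loss")        # supports incremental partial_fit


def update(texts, labels, first_batch=False):
    """Fold a new batch of labeled examples (1 = disinformation, 0 = not) into the model."""
    X = vectorizer.transform(texts)
    if first_batch:
        classifier.partial_fit(X, labels, classes=[0, 1])
    else:
        classifier.partial_fit(X, labels)


# Hypothetical usage: start with health-related examples, then adapt to a new topic
# without retraining from scratch.
update(["miracle cure suppressed by doctors", "official trial results released"],
       [1, 0], first_batch=True)
update(["fabricated claim about vote totals", "county certifies election results"],
       [1, 0])
```

The design point mirrored here is the one Liu raises: because disinformation evolves and mixes truth with falsehood, the model has to keep learning from new topic areas and social settings rather than being trained once and frozen.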

Arming journalists on the frontlines of disinformation

Though disinformation purveyors cannot alter your grandpa’s nostalgic stories during Thanksgiving dinner and deepfakes don’t appear in family photo albums, disinformation campaigns are attacking bedrock institutions that many people depend on for truth in the communication landscape: the media and journalism. For this reason, journalists play an important role in developing tools to combat disinformation.

“The incorporation of journalists expands the focus of the project from solely computer science and starts to incorporate the medium through which people get a lot of their information,” says Ruston. “The ecosystem around disinformation is a critical component of its power to influence people and its spreadability. Therefore, we must understand and analyze the standards and practices of the media industry.”

Kristy Roschke, managing director of News/Co-Lab in ASU’s Walter Cronkite School of Journalism and Mass Communication, and Dan Gillmor, a professor of practice also at Cronkite, are co-principal investigators for the SID project. 

“We know that journalists can — intentionally and unintentionally — be the biggest amplifier of disinformation, and we hope to develop a preventive system that is intuitive and useful for them as both targets and disseminators of disinformation,” says Roschke.

Journalism is an ideal attack surface for disinformation purveyors because of the industry’s obligations to report “both sides” of an issue, according to Gillmor. 

“Journalism is one of the antidotes to the disinformation ecosystem if the industry is willing to avoid being manipulated and help their audiences learn what is true and real,” he says. “Doing the first of these requires better tools and techniques to filter out the fallacies.”

According to Roschke, incorporating journalistic and media literacy expertise will help ensure that the SemaFor technologies have utility for journalists and news organizations who are on the frontlines of disinformation. 

“Journalists are already having to do their own disinformation detection as they vet and report stories, and most lack the tools and training to do so easily. Our goal is to make the project accessible beyond the typical government defense users,” says Roschke. 

"Journalists can — intentionally and unintentionally — be the biggest amplifier of disinformation, and we hope to develop a preventive system that is intuitive and useful for them" -Kristy Roschke

Rebuilding trust in a digital age

According to a recent MIT study, a story containing disinformation on digital platforms reaches 1,500 people six times faster than a true one.

Though disinformation is an innately human problem, disinformation purveyors are spreading dangerous information through digital media systems far faster than the human grapevine.  

“Often, the goal of disinformation purveyors is not only to mislead, but to sow confusion and create an information ecosystem in which no one knows what is true,” says Bliss. “This erodes trust in some of the institutions that serve as the bedrocks of our society and can help create environments that are ripe for unrest.” 

Campaigns spreading false information about vaccines, the severity of the COVID-19 pandemic and the realities of climate change have confused readers and delegitimized lifesaving practices and policies.

“We are seeing the erosion of trust happen before our eyes — trust in government, trust in science, trust in media and trust in our fellow citizens are all deteriorating. When that trust has eroded and we cannot agree on basic facts, it becomes near-impossible to come together to address a range of concerns, including national security issues,” says Bliss.

By arming journalists with better tools and improving disinformation-detection algorithms, the SID project will make it easier to weed out pieces of disinformation and trace their creators.

“If we just focus on the pixels and the digits, we’ll be in a media manipulation arms race against our adversaries,” says Ruston. “The only way we're going to be able to mitigate the dangers of disinformation is if we incorporate the techniques and insights from a range of perspectives.”

This material is based upon work supported by DARPA under Contract No. HR001120C0123. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.

Approved for Public Release, Distribution Unlimited