Nadya Bliss, director, Global Security Initiative: Thank you.
“Bound to fail” – those are powerful words. I have heard those words quite a few times in my life. Sometimes they take a slightly different form – a popular version is “You can’t do that” or “You are not supposed to do that”.
The first time I remember hearing those words I was 5 or 6, trying out for ballet, back in the former Soviet Union. They could tell that I wasn’t going to be particularly tall – I didn’t really “look” like a ballerina in the making. I didn’t end up doing ballet. But I also think that was the last time I let those words stop me.
When I was little, I wanted to be a mathematician. In the Soviet Union, being a mathy girl wasn’t weird or discouraged, but I realized things were culturally quite different when my family moved to the United States when I was a teenager (yes, a great time to love math and change countries).
As a high-schooler, I realized computer science allowed you to leverage much of the mathematical rigor in ways that often let you see the impact of your work in a tangible and beautiful way.
In my high-school programming class, I was one of the very few girls; when I majored in computer science at Cornell, I was often one of just four in a 200-person class.
When I decided to do my master's and bachelor's degrees in four years, many of my friends thought I was crazy. I probably was a bit. I survived and landed a dream job as a staff scientist at MIT Lincoln Laboratory – a national laboratory developing technology to address national security challenges. There, I ended up being the youngest group leader in the more than 60-year history of the Laboratory. I founded the Computing and Analytics group and led large-scale research initiatives to address computational challenges for the Department of Defense and the Intelligence Community.
When I came to ASU, I decided it was important to write up my close to a decade's worth of research on graph theory as a dissertation. And so I completed a PhD in approximately a year and a half while working full time, first as an Assistant Vice President in Knowledge Enterprise Development (KED) and then as the director of the Global Security Initiative.
Along the way, there were always many people (often incredibly well meaning) who would say that all of this was impossible, or that I couldn't do it, or that no one had done it. Quite frankly, for me that simply fuels the fire. Don't get me wrong – I realize today that ballet probably would not have been for me and focusing on math was a much better choice. But from then on, I have always made sure that that choice was made by me and not for me.
I haven't had what one would consider a traditional academic career; yet, I have always focused on taking the most innovative research and applying it to the most challenging problems in security. Those two components together, innovation AND impact, are what drive me (and have driven me for decades).
Today, we face many highly complex challenges both nationally and internationally. From security of our information networks, to planning for and managing natural disasters, to emergence of new infectious diseases, to social and political conflict throughout the world - these challenges are messy and highly interconnected. As an example, cyber security touches on pretty much everything in today’s society. A rather simple vulnerability like not checking the validity of a webform input could potentially allow compromise of our election databases.
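The webform example deserves a moment of concreteness. One classic instance of this kind of vulnerability is SQL injection, where unvalidated input is pasted directly into a database query. The sketch below uses an invented, minimal voter table purely for illustration – the schema, names, and queries are assumptions, not a description of any real election system – but it shows both the flaw and the standard fix (parameterized queries):

```python
import sqlite3

# Toy voter-registration table standing in for an election database
# (hypothetical schema, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, precinct TEXT)")
conn.execute("INSERT INTO voters VALUES ('Alice', '12'), ('Bob', '47')")

def lookup_unsafe(name):
    # Vulnerable: the webform input is spliced directly into the SQL string.
    query = f"SELECT precinct FROM voters WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT precinct FROM voters WHERE name = ?", (name,)
    ).fetchall()

# A crafted input turns the unsafe lookup into "dump every row".
malicious = "x' OR '1'='1"
print(lookup_unsafe(malicious))  # returns every voter record
print(lookup_safe(malicious))    # returns nothing: no voter has that name
```

The fix is a one-line change, which is exactly the point of the speech: the vulnerability is simple, but its consequences ripple through everything connected to that database.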
As another example, our energy delivery infrastructure requires resilience to both cyber attacks and natural disasters. At stake are confidential information, economic losses, damage to equipment, and power outages leading to greater socio-economic impact, to name just a few.
Similarly, it is impossible to talk about new epidemics without considering both environmental factors and the travel patterns of our citizens. So we often try to simplify. We try to make these problems somewhat more tractable. I am here to claim that it is precisely this desire to remove complexity for fear of failure that often prevents us from being ready to face these challenges.
So let's get back to those words: "Bound to fail". Those actually come from the first sentence of the abstract of a research paper from 1973 titled "Dilemmas in a General Theory of Planning" by Rittel and Webber. Why this paper? The context for those words is that the authors claim that you cannot address these messy interconnected problems with science and engineering. In fact, they define these types of problems as "wicked" – not in an evil sense (and not because I am from Boston), but as opposed to tame. As described in the paper, a few of the properties of these problems include: the lack of a well-scoped definition, no way to test whether a solution is the right one, and the fact that testing a solution has the potential to change the problem itself.
What does all this mean? Let's consider something like securing the internet – we can't really start from scratch, we can't make a fully secure processor (without removing all functionality), and any solution we do deploy has the potential to set off a sequence of unintended effects. An example of such an effect could be a loss of privacy as data collection is increased to better predict compromises of one's identity. Or a piece of software introduced to check code validity could slow down an application and frustrate its users.
How about another example: emergence of social and political instability? Again, not something that can be completely eliminated and often root causes can be difficult to identify. As both established and emerging economies grow, they stress our food, energy, and water systems, causing competition for resources and contributing to resource insecurity. How do we disentangle radicalization, resource insecurity, and economic pressures? How do we know that our development programs provide relief to areas in the world that are struggling? Does that mean that all is hopeless? Are we “bound to fail”?
I absolutely do not think so. You probably knew I was going to say that. But how does an engineering-college-trained computer scientist, who spent over a decade engineering technology for national security, make progress on something that has been declared unsolvable by STEM (science, technology, engineering, and math) techniques?
First, we have to try. It is imperative that we increase the engagement of engineers and scientists in these messy problems. And not just engage, but have the STEM disciplines work closely with policy makers, social scientists, political scientists, along with many others. It is absolutely impossible to address any of these problems with a single discipline. Often, people think that mathematicians, and computer scientists, and engineers are narrow in their thinking and encourage simplification.
But instead, I am here telling you to embrace the complexity. Not only that, I would actually claim that computer scientists specifically are well suited to this task – we are taught to formally appreciate complexity at a very early stage in our training. I also think that computer science is inherently collaborative and interdisciplinary – if we want to build an algorithm to do something of impact, we shouldn't do it alone.
My personal research is on the analysis of graphs, the mathematical structures that can encode relationships or connections between entities and concepts. So from where I am standing, not only are these wicked problems tame-able, we can leverage what we know from graph theory to help us on that path. A way to effectively manage complexity, rather than ignore it, is to explicitly account for the interconnectedness of these problems. It is true that addressing all the messiness at once is impossible, but that should not prevent us from making progress.
Second, we can observe that at the core of all these challenges is the notion of “planning” (it is even in the title of that original paper – “Dilemmas in a General Theory of Planning”). Instead of responding to a disaster (regardless of whether it is a cyber breach, a natural disaster, or an epidemic) how do we plan for it? How do we become proactive, instead of reactive, in making our world more secure?
This framing allows us to make measurable progress – progress towards better analytic and decision systems that account for the messiness of the real world without oversimplification. As an example, we can develop anticipatory models of the spread of disease that are coupled to changing climate patterns. That is a challenging task – data and models for disease and climate often come in inherently incompatible scales and formats. But if you bring together hydrologists, climate experts, disease experts, and computer scientists, you can start to not just anticipate where the next epidemic may arise, but also plan for the appropriate healthcare infrastructure to manage it.
In another effort at the Global Security Initiative, we are working on developing tools to anticipate instability through analysis of trade networks. In 2011, a drought in China's wheat-growing regions contributed to the revolution in Egypt, partly because of trade interdependencies. What we are working on is an anticipatory methodology to identify other regions that could be susceptible to similar events. It turns out that patterns of trade provide insight into regional stability. As a matter of fact, we can see patterns of trade for countries that are considered stable, and those are drastically different from the patterns for the countries that are not. But what is even more significant is that the tools we are developing can be used by a planner to potentially enable proactive intervention.
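To make the graph framing concrete: one very simple indicator a trade-network analysis might examine is how concentrated a country's imports are among its suppliers. The sketch below is a toy illustration, not the Global Security Initiative's actual methodology – the country names, volumes, and the concentration metric are all invented assumptions – but it shows how edges in a trade graph let a shock (such as a drought at one exporter) be traced to the importers most exposed to it:

```python
# Toy trade graph: each edge is (exporter -> importer, tonnes of wheat).
# All names and volumes here are invented, for illustration only.
trade = {
    ("A", "C"): 90,  # C imports almost all of its wheat from A
    ("B", "C"): 10,
    ("A", "D"): 30,  # D's imports are spread across three suppliers
    ("B", "D"): 35,
    ("E", "D"): 35,
}

def import_concentration(country):
    """Share of a country's imports coming from its single largest supplier.

    A value near 1.0 flags heavy dependence on one exporter, so a shock
    there (e.g. a drought) propagates directly along that edge; a low
    value suggests the importer can absorb the loss of any one supplier.
    """
    inflows = [v for (src, dst), v in trade.items() if dst == country]
    return max(inflows) / sum(inflows)

print(import_concentration("C"))  # 0.9  -> fragile to a shock at exporter A
print(import_concentration("D"))  # 0.35 -> more resilient to any one shock
```

Real analyses work over far richer data and more sophisticated graph measures, but the principle is the same: the structure of the connections, not just the totals, is what reveals where instability can spread.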
In cybersecurity, a proactive approach is a must. New vulnerabilities are constantly being discovered and built into brand-new attacks that can break into sensitive databases or take down servers. Attacks are often bought and sold for large amounts of money on dark web forums – online meeting places that can't be reached with standard web browsers. Researchers in our Center for Cybersecurity and Digital Forensics scrape data from dark web forums where exploits are sold, and analyze them. Last year, our research team found a never-before-seen attack before it was deployed "in the wild" – this gave the community a chance to really plan its defenses against it.
Finally, we have to accept that none of us can do this alone. I have always wanted to do research precisely because I wanted to make a difference. Spectral graph theory may seem like a pretty esoteric field. And yet, in all of the examples that I've talked about, understanding the connections between different elements of a problem provides a way to see how the puzzle pieces fit together. In addition to understanding connections, we see a few other common themes – a diversity of time scales, to understand how historical events shape our ability to anticipate and plan for the future; large, complex datasets coming from a variety of sources; and the need to bring together disciplines that have not traditionally worked together.
These commonalities allow us to apply what works in one area to others, thus making progress on what may seem unsolvable. They also allow us to fully embrace the complexity of the entire security landscape without compromising our goal of impact. But, if our goal is research with impact, failure, especially of the kind where you learn something and you get up and keep going, is not a bad thing.
It makes us tougher. It teaches us how to be better humans. And it allows us to make progress towards a more secure world.
Oh, and one more thing. My five-year old daughter is currently doing ballet.