Robots and artificial intelligence are growing increasingly autonomous, but they still need to work smoothly with humans to be effective. ASU Professor Nancy Cooke explores how humans, robots and AI can best work together as a team.
As a teenager I dreamt of being on my own — free of parental oversight and financial ties. I wanted to be autonomous. I wanted to be independent. I got a job, I found an apartment, and I started to work my way through college. I was so excited, I was living the dream, I was independent. But soon enough I realized I could use help from some roommates and I also took my parents up on the occasional offer of a hot meal and the use of their laundry facilities. And then I started thinking, “Am I really independent? Am I autonomous? Not really.”
Now, years later, I find myself studying teams. We’re all dependent on others in our social system. Human achievement is based on our collaborative nature. We are not autonomous and that’s a good thing.
Advances in technology have enabled machines to take on tasks autonomously; self-driving cars are a good example. But if we envision a world in which humans exist and prosper, that “autonomy” is going to need to be able to work alongside humans as teammates, in areas such as medicine, manufacturing, space exploration, transportation and the military.
But building these strange teams of humans, artificially intelligent agents and robots requires an appreciation for systems science, because how do you get all those moving parts that make up a human-AI-robot team to work together seamlessly, without unintended consequences? New scientific discoveries will need to be made. How can robots best communicate with humans? We need new models of mutual trust and understanding.
I am not autonomous, and neither is autonomy. I am interdependent, not independent.
I am embedded in a rich sociotechnical system, in which I depend on others. And if we expect robots or AI agents to be in this social system with us, they need to be developed to be good teammates. No man, nor robot, is an island.
ASU KEDtalks: The Podcast with Nancy Cooke
Diane Boudreau: Welcome to ASU KEDtalks: The Podcast. I’m your host, Diane Boudreau, and I’m here today with Nancy Cooke, an ASU professor of human systems engineering. Nancy studies teamwork among humans, robots and artificial intelligence. Welcome, Nancy.
Nancy Cooke: Hi. Thanks for having me.
Boudreau: First of all, what is human systems engineering?
Cooke: Well, that’s a good question. It’s the bringing together of psychology and engineering. Sometimes I say it’s the marriage of psychology and engineering. But it’s designing systems, devices, machines, robots, anything that a human might come in contact with, to be more compatible with human capabilities and limitations.
Boudreau: Okay. And it used to be called human factors. Is that correct?
Cooke: Human factors is another name for it. We named our program human systems engineering because we really take a systems approach to our work. We liked the word “systems,” and we’re in an engineering college, so it was appropriate. But it’s really a synonym.
Boudreau: How did this field come into being?
Cooke: In World War II, there were a lot of accidents with military aircraft. A lot of them happened because there were controls that looked the same, like levers that looked alike, where one let down the flaps and another did something else entirely, such as retracting the landing gear. So pilots would mistakenly pull the wrong lever. It soon became clear that we needed to do some work in this area. And so human factors really was born in wartime aviation.
Boudreau: Okay. And recently, you gave a talk about how human factors can change the world in terms of social issues. Can you give us an example of that?
Cooke: Yeah. I think human factors is much bigger than it was when it was born in wartime aviation, when we were dealing with planes. It soon became associated with knobs and dials. Nowadays, people even think that what we do is design office chairs. It’s much more than that. It’s much bigger than that. There are human factors people working in medicine, in transportation, in computing and in user experience, so we’re all over the place.
But I think it’s even bigger than that, and that’s what my talk was about: it’s about social systems, too. Social systems can also be impacted by human factors, because they are systems as well. I sometimes call them sociotechnical systems, systems of people and technology, and lots of it, working together. The example I used in my talk of this kind of social system transformation, one that could be partly impacted by human factors, is the city of Medellín, Colombia.
I went there for a conference one year, and was taken on a tour of the city and told the story of this amazing transformation. The city was … well, it’s kind of the opposite of Beverly Hills. The poorest people live on the top of the mountain, and the kids build houses on top of their parents’ houses. They’re kind of stacked, barrio houses on the mountain. And those people were divided from the richer people at the bottom, in the valley, geographically, culturally and by education.
And so, the World Bank and others came together to try to address this divide. And it was not only the divide: the city was at one time the murder capital of the world, due to Pablo Escobar. So not only did you have this geographical divide, you had rampant murder, drug cartels, guerrilla warfare and a lot of gang warfare as well. The people on the top of the hill were not happy.
So, the World Bank and others got together and decided to do a few things. They built a really rich transportation system: cable cars to connect the top of the mountain to the bottom, and even escalators that run up and down the mountain, which, interestingly, I saw dogs and cats using. So now, instead of a two-and-a-half-hour walk down the mountain to where your job is or where the grocery store is, it takes people five or 10 minutes. There’s also a light rail system now, and they started putting cultural centers up on top of the mountain. They built these library parks.
Boudreau: What’s a library park?
Cooke: Well, it looks kind of like a school, but it’s a community center where people of all ages can come together. They can have events there. They can get on the internet. They can learn how to do resumes. And so, it’s bringing education and culture to the people on the top of the mountain. This changed things dramatically. The murder rate went way down. The people on the top of the mountain started feeling good about themselves. They started painting beautiful graffiti murals. And it’s really just transformed the whole city of Medellín.
Now, I tried to find out whether human factors professionals were involved. But definitely, when you think about the transportation system, that’s a system of technology that unites people, so they should have been involved. One group that was involved was the gang members themselves. They participated in this transformation. And that tells me that the planners were taking the user into account, which is what we like to do in human factors.
Boudreau: Right. Okay. Your current research focuses on teamwork between robots, and humans and artificial intelligence. What would these kinds of teams be used for?
Cooke: Well, in all walks of life. From Amazon distribution centers, where you have people working alongside robots that are picking up boxes, to transportation systems, with autonomous vehicles and autonomous trucks on the road carrying people or goods, to medicine, with robotic surgery, and then in the military. In all walks of life, you see humans and robots starting to come together, humans and AI agents starting to come together, to do work.
Boudreau: So what are some of the biggest challenges in getting those teams to work effectively?
Cooke: Yeah, so just putting them in a room together is not enough. You need to really think about the appropriate roles of each of the agents. Who does what best? We don’t want to replicate ourselves in robots, because we do some things very well. But there are some things we can’t do, and that’s what the robot should be doing. We also need to figure out the best way for robots, humans and AI agents to communicate. Natural language may be overkill for some of what we need to do, so maybe we need something more like signaling, as in human-dog teaming.
Boudreau: So tell us a little bit about the research that you’re doing right now on teamwork between humans and robots and artificial intelligence.
Cooke: Okay. In one of my labs, we have a setup in which, typically, three people control a single simulated uninhabited aerial vehicle, an autonomous vehicle, so they’re controlling automation, basically. We’ve collected data in this lab on all-human teams for many, many years, probably close to 20. And in recent years, we’ve put an intelligent AI agent, developed by the Air Force Research Laboratory, into the pilot seat of that simulator. So now the pilot is a synthetic agent working with two humans: one of them is a sensor operator, or photographer, and the other is basically a navigator.
The task requires them to interact, to communicate, to coordinate and to take pictures of ground targets. That’s their job as a team, and they get scored on how well they do that job as a team. It’s interesting to then compare the all-human teams to teams of two humans with this one agent. We treated it kind of like a Turing test, because we wanted to find out what essential aspects of teamwork people may be privy to that the agent isn’t. And indeed the agent, its first rendition that is, turned out not to be a very good team player.
Boudreau: Why is that?
Cooke: The agent did not anticipate the information needs of others. When people come onto a team, even into our lab, they know certain things about being on a team. They know there are probably going to be other people there, that they are probably going to need something from those people, and that, in turn, they’re going to have to give something back to those people. The synthetic agent acted like it was the only agent in the room. It constantly asked for the information that it needed, and to get the information that they needed, the human teammates had to ask it directly. So we say that the agent pulled more information than it pushed.
It didn’t anticipate the needs of others. Eventually, when you get to be good at this task, you know that somebody is going to need a piece of information at a particular time, and you give it in advance so they have it when they need it. The agent did not do this.
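To make that push-pull distinction concrete, here is a minimal sketch in Python of the two sharing policies. It is an illustration only, not the lab’s actual testbed: the `Teammate` class, the role names and the two-step ask-and-wait delay are all assumptions.

```python
# Minimal sketch of "push" vs. "pull" information sharing on a team.
# Everything here is invented for illustration; it is not the lab's testbed.

from dataclasses import dataclass, field


@dataclass
class Teammate:
    name: str
    pushes: bool                      # True: anticipates teammates' needs
    inbox: dict = field(default_factory=dict)


def deliver(sender: Teammate, receiver: Teammate, item: str, step: int) -> int:
    """Deliver one piece of information; return the step at which it arrives."""
    if sender.pushes:
        # Pushing: the sender anticipates the need and sends it right away.
        receiver.inbox[item] = step
        return step
    # Pulling: the receiver must notice the gap and ask, costing extra steps.
    ask_and_wait = 2                  # assumed delay for request plus response
    receiver.inbox[item] = step + ask_and_wait
    return step + ask_and_wait


pilot = Teammate("pilot", pushes=False)        # like the first synthetic agent
navigator = Teammate("navigator", pushes=True)

print(deliver(navigator, pilot, "next_waypoint", step=0))  # 0: pushed in advance
print(deliver(pilot, navigator, "photo_target", step=0))   # 2: had to be pulled
```

Under these assumptions, every item a pulling teammate holds arrives late, which is why a team full of pullers struggles exactly when timing matters most.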
Boudreau: So how did the people respond to that?
Cooke: The people, interestingly, started becoming more like the agent. They stopped sharing information. They stopped pushing information, too, and soon enough, everybody was pulling information. As a result, the team, the human-agent team, wasn’t very good, especially when it came to difficult situations or novel events, what we call perturbations. They weren’t as coordinated or flexible or adaptive as the all-human teams were.
Boudreau: When you think about communication with AI too, I keep thinking about Siri or Alexa and I know some people will say, “Oh, Siri hates me.” And there’s this tendency to personify, I think, some of our technology. Do you see that when you see people working with artificial intelligence? Do you see people projecting sort of human qualities onto them and is that problematic or is that good?
Cooke: Yeah, so that’s interesting. We’re looking at that in one of our studies. It’s called anthropomorphism, projecting qualities of humans onto these robots, these artificial entities. And yeah, we see people doing it. In fact, I hear that soldiers sometimes don’t want to send their robots into harm’s way because they get kind of attached to them. So people do get attached. And I think that’s a little bit problematic in the case of humanoid robots, robots that have human characteristics physically.
They’re kind of creepy because they’re not completely human. You can’t fool someone into thinking it’s not a robot. But that might be problematic because, although people may like it, it suggests that the robot has qualities or capabilities that it may not have, like the ability to really sense your feelings or to understand emotion. And they’re working on that, too, AI that understands emotion. But I question why we need to be best friends with our robots.
Boudreau: At one point in our discussion, I think you mentioned that sometimes the human agents would start treating the AI badly. They would start just sort of barking commands at it and things like that. I’m a little bit curious. Is that necessarily a bad thing, if you know that you’re working with an AI, because you know that it’s not going to get its feelings hurt? Should humans have to modulate their behavior and treat AI more like people? Or do you think it’s okay that we treat different agents differently?
Cooke: Yeah, so with anthropomorphizing, you are treating it more like a person, and in some cases people don’t. In one experiment where I had three people come into the lab, in that same simulator for unmanned aerial vehicles, I told two of the people that they were interacting with an agent as a pilot when it was really just an unsuspecting participant. And they treated that participant differently. They barked more orders at that person. They kept them out of the loop more. They didn’t really treat them as a teammate. And so what that tells me is that people aren’t ready to treat agents as teammates. They still want to do what we call supervisory control.
They want to be in charge and tell them what to do. We see that a lot across our different experiments, that people want to tell autonomy what to do and that will stand in the way of teaming. So that’s another issue that we have to deal with and a lot of it’s people’s attitudes and trust in the autonomy.
Another condition that we ran in that same experiment we call the experimenter condition. Instead of putting the synthetic agent in the pilot seat, we put a very skilled experimenter from my lab in that seat, someone who knew how to do the pilot job. The humans in the experiment were told that they were working with a synthetic agent. But this experimenter, posing as a synthetic agent, would push and pull information in a timely manner, modeling how to coordinate. So when information wasn’t coming, the experimenter would ask for it.
Those teams, in contrast to the agent teams, were very, very good. They were better than the all-human teams: better at coordinating, more adaptive, just from this subtle push-and-pull coaching the experimenter was doing. So in that case, the two humans became more like the experimenter and entrained in that direction, showing that if you put one really good agent on a team, you can also raise the team up.
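As a toy illustration of that entrainment effect, here is a short sketch, again with invented numbers rather than anything from the lab: each human teammate’s probability of pushing information drifts toward the average of its teammates’ each round, while a scripted agent holds its rate fixed.

```python
# Toy illustration of entrainment: each adaptive teammate's tendency to
# "push" information drifts toward its teammates' average tendency over
# repeated missions. All rates and values are invented for illustration.

def entrain(push_rates, fixed=frozenset(), drift=0.2, rounds=10):
    """Nudge each adaptive teammate's push rate toward the mean of the others."""
    rates = list(push_rates)
    for _ in range(rounds):
        updated = []
        for i, rate in enumerate(rates):
            if i in fixed:
                updated.append(rate)          # scripted agents don't adapt
                continue
            others = [r for j, r in enumerate(rates) if j != i]
            target = sum(others) / len(others)
            updated.append(rate + drift * (target - rate))
        rates = updated
    return [round(r, 2) for r in rates]


# Two cooperative humans (0.8) and a selfish scripted agent that never pushes:
print(entrain([0.8, 0.8, 0.0], fixed={2}))  # humans drift down toward the agent

# The same humans and a skilled confederate pilot that always pushes:
print(entrain([0.8, 0.8, 1.0], fixed={2}))  # humans drift up toward the confederate
```

The same drift rule moves the humans in whichever direction the fixed teammate sets, which mirrors both results Cooke describes: teams degrade around the selfish agent and improve around the skilled confederate.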
Boudreau: So if we had autonomous agents that were really, really good, people could learn from them potentially.
Cooke: Yes.
Boudreau: Do you think that the reverse could happen, that the AI could learn from people, if we can develop it well enough?
Cooke: Yeah, well, that’s the idea with machine learning. The AI could learn, just like a person learns, how to do a particular part of the team task.
Boudreau: Okay, changing course a little bit here. When we met previously, you mentioned having a pretty unique hobby of hot-air ballooning. Can you tell us how you got into that?
Cooke: Through my current husband, actually. He was at a hot-air balloon rally with one of my girlfriends, and she was going to stop by my house, which at the time was in New Mexico. It was on a weekend, and we were all going to do something together, so I met him through her and started hot-air ballooning with him. I had my first hot-air balloon ride over White Sands National Monument, and that was pretty incredible. I have been hooked ever since. I’m not a pilot, but I’m crew.
Boudreau: Do you own your own balloon? I mean, how does that work?
Cooke: Yes, yes. In fact, it’s my husband’s balloon. He actually made the balloon. He has a friend in Georgia named Tarp Head, who is a manufacturer of balloons. He went to visit him years ago, wove the basket and made the balloon.
Boudreau: Wow.
Cooke: He’s a good seamstress.
Boudreau: Wow, that’s amazing. And what struck me as interesting about it was when you were talking about it, you said that it is also a team activity, and I never thought about it in that light, but of course, with something that huge, you’re going to need your team. Can you tell us a little bit about how that works?
Cooke: Yeah. So you can’t do ballooning without a chase group, because you need somebody to be there when you land, and you need help assembling the balloon and taking it down, at least with these larger balloons. There are single-person balloons with just a little chair, but that’s not what I’m talking about. You need a team of people to assemble the balloon, and it’s kind of an interesting team task because there’s a lot of noise. There’s a burner and a fan at one end of the balloon. The basket is laid down when you’re trying to inflate it. The pilot is in the overturned basket, and you have a crown line at the other end, at the top of the balloon, trying to hold it steady while it’s inflating.
So there has to be some coordination between the person that’s at the top of the balloon and the pilot. They have to understand how things are supposed to work and how fast this is supposed to go up, because a lot of mistakes could be made. And the rest of the crew has to know their jobs, know what to do. It’s a pretty interesting team task, actually.
Boudreau: Do you have any great stories or interesting stories of times maybe the teamwork didn’t work out so well?
Cooke: Yeah, so this is kind of an interesting cultural story. We were ballooning in France, which is a fantastic thing to do. I’ve done it a couple of times now over the wine regions of France: you go ballooning, land in farmers’ yards and have great after-ballooning parties with some of their homemade bread and cheese and wine. It’s fabulous.
Boudreau: Sounds it.
Cooke: This one time, we had an all-French, all-male crew, and I had my six-month-old daughter with me in the chase vehicle. She was colicky and crying all the time, and I don’t know French that well, so I had a French dictionary and I was trying to translate. My husband is in the balloon with a radio, talking to me. I’m translating back to the crew. I have a map that I’m looking at, talking to my husband about the map, trying to keep my colicky daughter settled down.
The French guys were yelling at me that we had to take the baby to a hospital. And I’m like, “No, no.” But then, when I said, “We need to go,” they said, “No, we’re going to have a picnic now.” And I said, “But my husband’s telling us to go,” and they didn’t care. They also would go right when I said, “We need to go left here.” And I thought, “Maybe my French is just that bad.” It turned out that they really didn’t like taking orders from a woman.
Boudreau: Oh my gosh.
Cooke: So they were kind of read the riot act by my husband, and then things got better, and the baby didn’t go to a hospital, and we all got along. They were very nice. It’s just that cultural differences in teamwork are kind of interesting.
Boudreau: Yeah. So coming back to teamwork with robots and AI, do you think that incorporating some of these artificial agents into teams will help with some of those biases and cultural differences? Or do you think that they might get incorporated into the AI as well?
Cooke: Yeah, that’s interesting, too. You could consider the AI or the robots themselves to have their own culture, their own kinds of characteristics. They’re certainly not always like humans. You can also see the AI learning from the human teammates, so it might pick up on some of the characteristics of those humans. In our lab, we see the reverse. We call it entrainment: we have an AI agent that’s not much of a team player, really kind of selfish, that doesn’t share information as much as it should, and the humans start becoming like the synthetic agent. So these agents definitely can have an influence on a group. And it’s kind of scary to think about why the humans would model themselves after a synthetic agent.
Boudreau: That is interesting and I’m glad that you’re thinking about it. Well I appreciate you joining us today. Thank you very much for being here.
Cooke: Thanks for having me. I’ve enjoyed it.
Boudreau: If you’re interested in more from Nancy Cooke, watch the ASU KEDtalks video at research.asu.edu/kedtalks. Subscribe to our podcast through your favorite podcast directory and find us on Facebook and Twitter @ASUResearch.
Nancy Cooke leads the Center for Human, Artificial Intelligence, and Robot Teaming (CHART), a unit of the Global Security Initiative. The Global Security Initiative is partially supported by Arizona’s Technology and Research Initiative Fund. TRIF investment has enabled hands-on training for tens of thousands of students across Arizona’s universities, thousands of scientific discoveries and patented technologies, and hundreds of new start-up companies. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.