Attraction, Deception, and Sacrifice
Ronald Arkin
When robotics was a young field several decades ago, many of us were working on simply trying to get a machine to intelligently move across a room without crashing into anything. Around that time, I started exploring the idea of teams of robots working together in a cooperative way.
I encountered lots of skeptics who often asked, “How can you work with groups of these machines when it’s still a challenge to simply get one to behave intelligently?” Undaunted and federally funded, I edged forward.
I’ve always relied heavily on biology for inspiration, and that is where I found my answer—that paradigm shift—identifying that what we needed was multiple team members able to provide distributed sensing, acting and computing in truly novel ways compared with single robots. The hard part was to make it work. How do you make robots move across an area as a team, maintaining consistent positions relative to one another?
Looking at bird flocking and sheep herding, we discovered underlying mathematical behavioral models in the way the agents (biological or robotic) could be attracted to one another and to a destination point while being repelled by obstacles along the way. We studied not only robots that are similar to each other but those that are different, using various biological models such as bird lekking (a kind of tailgating party by prairie chickens); bird mobbing (where groups of birds such as the Arabian babbler attack predators that threaten them); and wolf pack behavior (analyzing the various stages of the hunt and incorporating the roles of young pack wolves and older, heavier members while valuing the importance of each in the pack).
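The attraction-and-repulsion idea can be sketched in a few lines of code. This is a minimal, illustrative example of a potential-field-style steering rule, not the actual models from our research; the weights, function names, and the simple inverse-distance repulsion are all assumptions chosen for clarity.

```python
import math

def unit(dx, dy):
    """Return the unit vector for (dx, dy), or (0, 0) at zero length."""
    d = math.hypot(dx, dy)
    return (dx / d, dy / d) if d > 0 else (0.0, 0.0)

def steering_vector(pos, goal, teammates, obstacles,
                    w_goal=1.0, w_cohesion=0.5, w_avoid=2.0, safe_radius=2.0):
    """Sum three illustrative behaviors: attraction toward a goal,
    cohesion toward teammates' centroid, repulsion from nearby obstacles.
    All weights are hypothetical tuning parameters."""
    vx, vy = 0.0, 0.0
    # Attraction toward the destination point.
    gx, gy = unit(goal[0] - pos[0], goal[1] - pos[1])
    vx += w_goal * gx
    vy += w_goal * gy
    # Cohesion: attraction toward the centroid of teammate positions.
    if teammates:
        cx = sum(p[0] for p in teammates) / len(teammates)
        cy = sum(p[1] for p in teammates) / len(teammates)
        ux, uy = unit(cx - pos[0], cy - pos[1])
        vx += w_cohesion * ux
        vy += w_cohesion * uy
    # Repulsion from obstacles inside safe_radius, stronger when closer.
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0 < d < safe_radius:
            ux, uy = unit(pos[0] - ox, pos[1] - oy)
            strength = w_avoid * (safe_radius - d) / safe_radius
            vx += strength * ux
            vy += strength * uy
    return vx, vy
```

Each robot recomputes this vector every control cycle and moves along it, so coordinated group motion emerges from purely local rules rather than a central planner.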
We also considered how humans and robots work and play together as a team, endowing them with artificial emotions to permit people to better relate to these machines. With Sony, for example, we did collaborative research for the AIBO robotic dog and QRIO, a small humanoid. With Samsung, we looked at operations in search-and-rescue situations, trying to determine the best ways for robots to act to ensure the cooperation of human partners through trust and affective demonstration.
More recently, I have considered the interaction between disparate members of a team with respect to robotic deception—trying to draw from human psychology, as well as the common squirrel, to learn when it is best for a robot to deceive another human or robotic agent and just how to do it. In the near future, we plan on exploring altruism and mutuality—when should an agent sacrifice its resources or even its existence on behalf of other team members?
As robots become ubiquitous in our military, homes and industries, questions surrounding teamwork become ever more pressing and ethical in nature, especially considering that teamwork involves not just robots working together or people working together, but robots and humans sharing their tasks and experiences.
This interview was originally published in the June-December 2014 issue of The Henry Ford Magazine.