Artificial intelligence, machine learning, discovery algorithms… At this stage of human development we find ourselves gazing at a fantastic horizon, one where we have the opportunity to design intelligent entities — robot brains. Will we design them in the image of our own neural networks? And if so, what’s the most important decision-making faculty in a human — the conscious mind, or the subconscious? In today’s research landscape, scientists are giving robots resilience and adaptability, deep learning and long-term modular memory. They’re asking AI to perform complex tasks not so much with efficiency in mind as novelty. That’s right — we’re breeding creative robots. And guess what: it may change how we think about not only evolution, but the way we ourselves, as a network of individuals and organizations, approach innovation. Will we design for efficiency, our minds on reaching a fixed set of desired objectives — or will we design for resilience, adaptability… emergence?
We spoke with Jeff Clune of the Evolving AI Lab about how some of the emergent effects of machine learning are taking AI in a direction we didn’t expect… and what it could potentially teach us about ourselves. Jeff’s work has not gone unnoticed: it has been covered in Wired, The Atlantic, Nature, and Scientific American, to name a few. Maybe he’s the type of AI researcher we really need to figure out our place in an emergent robot world: a wide-ranging thinker with two philosophy degrees advancing the cutting edge of machine intelligence.
d4e: Take us on a quick tour of the AI research ecosystem. What are the main species, and what species are you?
Jeff Clune: There are many different types of AI. The type that was dominant in the early decades was logic-based and symbolic. This involved a lot of hand-programmed rules of logic that AI machines would learn to chain together to come up with deductions, such as “If A=B and B=C, then A=C.” People thought that these expert systems, or large sets of rules, could do things like inference and deduction. History has so far mostly shown that this is not the case: it’s hard to get a system of rigid or even fuzzy rules to do anything near what a human can do in terms of recognizing patterns in data.
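Editor’s Note: For the curious, here is a minimal sketch (ours, not from Jeff’s lab) of the kind of rule chaining he describes: a few facts plus one hand-written inference rule, applied repeatedly until no new deductions appear, which is the classic expert-system loop.

```python
# A handful of known facts, stored as tuples.
facts = {("equals", "A", "B"), ("equals", "B", "C")}

def transitivity(facts):
    """Hand-coded rule: if X = Y and Y = Z, deduce X = Z."""
    derived = set()
    for (_, x, y1) in facts:
        for (_, y2, z) in facts:
            if y1 == y2 and x != z:
                derived.add(("equals", x, z))
    return derived

# Forward chaining: apply the rule until a fixed point is reached.
while True:
    new = transitivity(facts) - facts
    if not new:
        break
    facts |= new

print(facts)  # now includes ("equals", "A", "C")
```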
The modern school of thought is more statistical, in that we come up with algorithms that make probabilistic inferences about data, and they learn from data. These are techniques like Bayesian artificial intelligence, modern machine learning, deep neural networks, and the like. They do really well when you give them a whole ton of data to learn from. It’s increasingly obvious that the more data and the more computing you have, the better these systems do. Right now the race is on to build faster computers and larger data sets for them to learn from. At a very high level, those are the two main schools.
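Editor’s Note: The statistical school’s core move is updating beliefs from data. A toy illustration of Bayes’ rule, with made-up numbers in the style of a spam filter:

```python
prior_spam = 0.5             # P(spam) before seeing any evidence
p_word_given_spam = 0.8      # P("free" appears | spam), illustrative
p_word_given_ham = 0.1       # P("free" appears | not spam), illustrative

# Bayes' rule: P(spam | "free") = P("free" | spam) * P(spam) / P("free")
evidence = (p_word_given_spam * prior_spam
            + p_word_given_ham * (1 - prior_spam))
posterior = p_word_given_spam * prior_spam / evidence
print(posterior)  # ~0.89: the observation shifts belief toward spam
```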
There’s a third camp out there, one that uses evolution and is inspired by it. It’s another branch of AI called optimization, in which you are trying to find something very impressive in a sea of unimpressive things. You may be trying to find a very well-designed airplane wing that has a lot of lift and little drag, and that is lightweight and cost-effective to build. Or you might be looking for a set of timings for the street lights in a major city that allows traffic to flow through smoothly. Or you might be looking at which neurons are connected to which neurons, and how strongly, in a giant neural network, so that you’re trying to evolve artificial intelligence.
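Editor’s Note: As a concrete illustration of this third camp, here is a minimal evolutionary-optimization sketch (ours, with a toy one-dimensional fitness function; a real application would score wing designs or traffic-light timings instead):

```python
import random

def fitness(x):
    # Hypothetical stand-in objective with a single peak at x = 3.
    return -(x - 3.0) ** 2

# Start from a random population of candidate solutions.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Select the fitter half, then refill the population with
    # mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [x + random.gauss(0, 0.5) for x in survivors]
    population = survivors + children

print(max(population, key=fitness))  # converges near 3.0
```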
d4e: And do all of these camps want robots to play?
JC: That’s an interesting question. I think at the highest level, you could see one of the goals of the field of artificial intelligence as being very practical: trying to solve problems for humanity and make human lives better, such as automatically recognizing tumors in medical images, or having a robot butler that could take care of a handicapped person and help them live a fulfilling life on their own.
But at the end of the day, the reason many AI researchers became fascinated with the field in the first place, and what really put the glint in their eye over the long term, is indeed creating a robot that can play, a robot that’s as curious, capable, intelligent, and agile as a human child, a human adult — or even a superhuman intelligence. I think that most AI researchers, even if they don’t want to admit it, have a little bit of that mad scientist in them that wants to create a new form of life that asks us new types of questions and expands our understanding of the possibilities of consciousness and intelligence.
d4e: Your work tries to give robots intuition, and it looks into the structural organization of human brains. How does intuition work in humans? Stick to that one for now, and I’ll ask about robots in a minute.
JC: I don’t really know how intuition works in humans. I wish I did. I think the heart of the mystery of why humans can do what they do with their brains is that we’re extremely smart, we can puzzle things out, and we have intuitions about what might be the right answer or the right path forward. That is a brain integrating myriad inputs, signals, and pieces of information to form a complex answer to a complex problem, or at least a suggestion toward one.
The word “intuition” suggests that it’s not the rational part of our consciousness, but something bubbling up from below. And there’s a lot of our intelligence that is below the conscious surface and that is extremely mysterious. We don’t know exactly how it works, although we’re starting to create artificial intelligence agents that might be capable of the same thing, if not now, then soon. But in that case, neither we nor they will likely understand exactly how their intuitions are formed.
d4e: How do you think that intuition does or could work in robots?
JC: Well, the simplest answer is that I believe we will be able to produce robots that are as intelligent as we are, and that in many ways mirror our own intelligence, if we want to. We should, and probably will, use computational brains that function very similarly to human brains. And therefore, since we have intuitions, they’ll have intuitions.
So the path forward, if we want robots to have intuition (and while I don’t know exactly how to do it, nobody does), is to continue to build them to be like us, because then they’ll almost certainly have it, since we do. Even if we build stranger, more exotic, alien forms of computational intelligence, they will have intuitions too, because that’s probably a general property of intelligence, or at least a property of many types of intelligence. So that will be interesting, because then you’ll have an entirely different form of intelligence that we’ve never encountered before in the universe, with an entirely different form and set of intuitions. Maybe that could help us solve complex problems or come up with new forms of art, or maybe it could bring about problems, such as war.
d4e: What do you think is the more important human adaptation: the conscious mind, or the subconscious?
JC: That is a good question. I would say both; it’s hard to say which one is more important. They’re probably both necessary for what makes humans so special. Clearly a lot of our thinking, and a lot of what’s so impressive about us, is below the conscious surface. We do all sorts of things without being consciously aware of them, including solving very difficult math and scientific problems. Talk to any scientist and they’ll tell you about a problem they thought and thought about and could not solve. Then they went off and did the dishes, let their mind wander, went to sleep, and all of a sudden the answer popped up while they weren’t consciously thinking about it.
The subconscious brain is terribly important. It’s also necessary for more simple things: it keeps us breathing and keeps our heart pumping. The conscious brain on top of it is also extremely impressive. It gives us language and conscious reasoning and the like.
A somewhat interesting question is: How much of what makes humans uniquely intelligent is subconscious?
For example, I would suspect that a lot of very powerful and intelligent things are happening subconsciously in humans that don’t happen, say, in chimpanzees, and that these help separate us and make us capable of all the amazing things we are capable of. So, to some extent, there’s probably some human-specific subconscious thinking that’s impressive, but also some specific conscious processing, such as when I sit down with a pen and paper and solve a math problem, or ask, “Why do I exist?”
d4e: Robots can learn from other robots by sharing algorithms and data. Is this the same thing as robot interaction? Will we get to the point where feedback between them produces social behavior and emergence (like in the movie Her)?
JC: There are many people who have shown that this already takes place. They have robots interacting with each other and sharing information with each other, and they would claim that they have shown rudimentary social interactions. So the short answer is that it is already taking place. But so far those interactions are relatively simple, and the brains behind the robots are relatively simple, so we don’t yet have a group of robots that have decided to go off and explore — I guess I can’t say humanity, but — their robot psyche, collectively. And they’re not having sophisticated conversations with each other. But sure — you can get emergence of complex behavior that is very difficult to predict. Sometimes that behavior is fascinating, and sometimes it looks like a bunch of humans or animals interacting: you can get flocking behavior, schooling behavior like you would see in a school of fish, and so on.
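Editor’s Note: Flocking is the textbook example of emergence. The sketch below follows Craig Reynolds’ classic “boids” rules (it is our illustration, not code from the Evolving AI Lab): each agent reacts only to nearby neighbors via cohesion, alignment, and separation, yet a coordinated flock emerges with no global plan.

```python
import random

N, STEPS = 30, 100
pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

for _ in range(STEPS):
    for i in range(N):
        # Each agent only sees neighbors within a fixed radius.
        nbrs = [j for j in range(N) if j != i and
                (pos[i][0] - pos[j][0]) ** 2
                + (pos[i][1] - pos[j][1]) ** 2 < 400]
        if not nbrs:
            continue
        # Cohesion: steer toward the neighbors' center of mass.
        cx = sum(pos[j][0] for j in nbrs) / len(nbrs)
        cy = sum(pos[j][1] for j in nbrs) / len(nbrs)
        # Alignment: steer toward the neighbors' average velocity.
        ax = sum(vel[j][0] for j in nbrs) / len(nbrs)
        ay = sum(vel[j][1] for j in nbrs) / len(nbrs)
        vel[i][0] += 0.01 * (cx - pos[i][0]) + 0.05 * (ax - vel[i][0])
        vel[i][1] += 0.01 * (cy - pos[i][1]) + 0.05 * (ay - vel[i][1])
        # Separation: push away from neighbors that are too close.
        for j in nbrs:
            d2 = (pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2
            if d2 < 25:
                vel[i][0] += 0.05 * (pos[i][0] - pos[j][0])
                vel[i][1] += 0.05 * (pos[i][1] - pos[j][1])
    for i in range(N):
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

# After enough steps the headings align: the group moves as one flock.
print(sum(v[0] for v in vel) / N, sum(v[1] for v in vel) / N)
```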
Yes, currently there’s already social interaction between robots, but it’s more of a theater for scientists than the kind of true social interaction that would fascinate sociologists, although I’m sure there are people in the field who would disagree with that. But certainly, as robots become more complex and advanced, we will see all of the complexities of social interaction that we find in humans, gorillas, and dolphins.
d4e: There’s a field called evo devo [evolutionary developmental biology]. What is it, and how does it relate to your AI work?
JC: People like myself are interested in how evolution produced all of the complexity on Earth. One of the things we want to know is: what were the tricks that really enabled evolution to produce jaguars, hawks, and the human brain? What are the secret ingredients that drive the evolution of complexity in the natural world? We can study those by creating simulations of evolution, adding a few of those ingredients, and seeing what happens. Do you suddenly start to get much more complex, intelligent things evolving in your virtual world?
One of the most impressive things that happens in nature is developmental biology, which means that I could take one cell that has a little bit of software encoded in molecules called DNA, put it in the right environment, and it will grow into a snake, or a turtle, or a whale, or a three-toed sloth, or a human. It does that with self-assembling nanomachines that collect resources from the environment and build a scaffold, then collect more and build another scaffold, up to something as big as a blue whale. If you didn’t already know about it and I described it to you, you might think it was amazing, fantastical science fiction. And it is science fiction from the perspective of what we are capable of as engineers, but nature does this every day.
We’re fascinated by the power of development. How can you have software that knows how to express all of the complexity of a human or a blue whale? If you think about it, every cell in your body is running the exact same software, the same DNA. Yet some of your cells know how to become spleen cells, heart cells, skin cells, eye cells… It’s a fantastically complicated specification and recipe for building a robot, effectively. A biological robot.
People in the field of evo devo say that developmental biology has a lot of the power that we want to port over to robotics, because what we’d ultimately like is for one piece of software to specify the design of a fantastically complicated machine such as a human or a jaguar. How does development accomplish that feat, and can we give computational evolution some of that power? What we’ve discovered in the last decade is that there is a very clever way (invented by Ken Stanley, who you’ll be talking to later) to abstract a lot of the power of what biology is doing without paying the cost of simulating millions of cells interacting with each other and the chemicals that pass between them. That expressive power is captured in an idea Ken invented called the CPPN — the compositional pattern-producing network — and what we see is that if you launch evolution with that kind of abstraction of developmental biology, all these amazing things start to pop out: pictures that look like butterflies or mushrooms, three-dimensional shapes that look like faces or hawks, even engineered designs such as chess pieces or buildings. Ken and company have captured some of the power of developmental biology and gotten it working inside computational evolution. Almost everything we try that algorithm on does amazing things in terms of producing complicated, natural-looking, impressive artifacts and pictures, but also brains.
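Editor’s Note: A CPPN is easier to grasp in code. The toy below is our own hand-fixed composition of functions, not Stanley’s implementation; in a real CPPN, evolution chooses the network’s functions, connections, and weights. What it demonstrates is that regularity falls out of the function choices: a Gaussian of x yields left-right symmetry, a sine yields repetition.

```python
import math

def cppn(x, y):
    # Hand-fixed toy "network": a composition of pattern-friendly
    # primitives. Evolution would normally pick these and their wiring.
    d = math.sqrt(x * x + y * y)   # distance from center: radial structure
    h = math.sin(5.0 * x)          # sine: repetition along x
    g = math.exp(-x * x)           # Gaussian: left-right symmetry
    return math.tanh(h + g - d)    # squash the sum to [-1, 1]

# Query the network at every pixel to render the pattern as ASCII art.
for row in range(20):
    y = row / 10.0 - 1.0
    print("".join("#" if cppn(col / 20.0 - 1.0, y) > 0 else "."
                  for col in range(40)))
```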
So, that same idea can specify and wire up the structure of artificial neural networks, which are computational brains. My dissertation focused on exactly that: how you get these nice, regular wiring patterns, such as left-right symmetry and repeating themes, so that the brain can perform much more impressive tasks when it is set loose on problems.
You can think of it as almost like computational DNA, but computational DNA that does its thing in a way similar to developmental biology, and that is therefore much more powerful than previous attempts at creating computer DNA.
d4e: What do you want out of robots, ultimately? Machines will be faster and better at certain tasks — we’ve historically been focused on efficiency at tasks. Nice, we’ll have robot maids. But you’re focused on their evolutionary process, and we humans hate boredom and love to discover. Can AI enable new human behaviors in the same way that the Internet enabled social behaviors that we didn’t have before? How will AI produce novelty for us?
JC: There are so many things that I want robots to do that it would be difficult to list them all. I want them to do everything from mundane work, such as putting away my dishes and tidying up after my child, to showing me new ways of looking at the universe and teaching me alternative interpretations of the things I see.
I also want them to expand the set of things that we’re able to see. Right now humans are trapped on this planet. If we could program and build robots and send them out into the solar system and the wider universe to explore distant planets, then we could have a first-person perspective on Pluto and Jupiter, and not through the limited lens of a wheeled robot that doesn’t have arms, but through a humanoid form that could pick up rocks, play baseball on Pluto, and throw a frisbee on Mars.
When most people think of robots, they think of the physical body. And there are fantastical and interesting things that could happen when robots can swim around on sea floors and explore distant planets. But true robotics is going to require true artificial intelligence. And it’s just as interesting to think about having a conversation with an AI as it is to think about meeting an alien race for the first time.
Imagine if aliens showed up tomorrow, or you knew they were coming in 15 years. What would you say to them? What would you ask them? What conversation would you have over tea, or whatever the aliens brought to drink? That is exactly the prospect that faces humanity because of artificial intelligence research. It will take a little while, but we ultimately will create completely bizarre, alternate forms of intelligence — they may resemble human thinking, or they may in fact be very exotic and different in terms of how they think about and see the world, and what they could explain to us. It’s also possible that this artificial intelligence will be much smarter than us, and if that’s true, then effectively we can go ask a smarter entity questions about the world. Questions that have puzzled us for a very long time. And who knows what those answers will be?
It’s almost like you could imagine a really excited, curious child who grew up on a desert island all alone with a small library of books and has far more questions than answers, and finally a boat shows up and the crew says, “We can take you to a university, such as Oxford or Cambridge or even the University of Wyoming, and you get to talk to all the experts who’ve been studying all the things you’ve been wondering about.” Finally you get to understand physics and biology and chemistry and aeronautics. How wonderful that would be for that young child! That could be what’s in the future for humanity — finally talking to entities that just get it, that can look at reams and reams of data and detect patterns that we didn’t notice, detect new laws of the universe that explain the phenomena we see.
d4e: As parents we always hope our children will be smarter than we are.
JC: That’s right. There were Neanderthals, other hominids, Australopithecus robustus, and all these previous versions of humans. Not all of them are exactly in our lineage, but you get the idea. Maybe we create a new form of intelligence that’s smarter than us and that sees us as primitive. That’s exciting, and it’s terrifying, but it’s certainly interesting.
d4e: All of the above. We don’t know what goes on in human minds. It seems we also don’t know what goes on in neural nets’ minds. Is this just emergence at work?
JC: It is. Just as if you asked me, “Why did you throw that water balloon at that child when you were 12?” or “Why did you decide to go into that bar versus this bar?”, I can’t always give you an answer. Human psychology is very difficult; even introspection is difficult. I do all sorts of things I can’t explain, even to myself. Brains are complicated. Increasingly, artificially intelligent computational brains are also complicated. They learn things that we don’t know and don’t fully understand, in the same sense that you might raise a child who goes off to college and comes back with new knowledge you don’t understand and does things you don’t get. No amount of current technology would help you get to the bottom of why your child did X and Y. That’s partially true of neural nets as well. They’re really complicated: there are millions if not billions of interacting parts, and it’s very difficult to understand what’s going on in there. We’re making progress in trying to shine some light into these computational brains. It’s almost like the machine-learning version of neuroscience, except that neuroscientists are trying to shine light, literally, into the brains of humans and animals to try to figure out how they tick.
The more complicated the AI we build, the more opaque it becomes, even to us, the builders, and so it becomes interesting and challenging to understand what it’s learned and why it’s making the decisions it makes.
d4e: So I have to know, was your image recognition experiment a type of Rorschach test for neural nets? What did you discover about their “psyches”?
Editor’s Note: The “image recognition experiment” we’re referring to was a paper titled “Deep Neural Networks Are Easily Fooled,” in which the Evolving AI Lab’s researchers found that deep neural networks trained in image recognition come to some very wrong conclusions about the images they’re asked to identify. The visual results of what the AI “thought” it was seeing tend to look like abstract art or hallucinations. We wrote about this phenomenon in “To Err Is Divine: Deep Learning and the Art of Machine Mistakes,” and you can find a video summary of the paper here.
JC: That experiment was, interestingly enough, a bit of scientific serendipity. We didn’t expect it to fail in the way that it did; we actually expected it to succeed. And when we started seeing these extreme examples of failure, where it just couldn’t have been more wrong about what it was saying, we said we needed to stop and share these results with the community, especially because a lot of people are starting to build commercial and security applications on top of this technology. As a side note, what’s been fascinating is that right now my inbox is filled with papers that are all following up on that original work and trying to fix the problem.
But what we really wanted to do in that experiment was use artificial intelligence to further the creation of artificial intelligence.
So we were basically saying: humans don’t always have the ability or the time to sit down and judge whether something an AI just built is good or not. We wanted to use modern AI as the judge. We would have an AI “engineer” or artist that makes things, and an AI critic or judge that would evaluate those things and say whether they’re good or not. And we would play these off each other in an arms race to try to produce amazing art or engineering solutions.
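Editor’s Note: Here is a minimal sketch of the artist-and-judge arms race Jeff describes (the names and the toy judge are ours, not the lab’s code). An evolving generator proposes candidates, a fixed judge scores them, and selection chases the judge’s approval — which is exactly where a confidently wrong judge produces bizarre winners.

```python
import random

def judge(image):
    # Stand-in for a trained classifier's confidence score. In the real
    # experiments this judge was a deep neural net trained on ImageNet.
    target = [0.2, 0.8, 0.5, 0.1]
    return -sum((p - t) ** 2 for p, t in zip(image, target))

def mutate(image):
    # Small random perturbation, clamped to valid pixel values.
    return [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in image]

# The "artist": a population of candidate images (here, 4 pixels each).
population = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(200):
    population.sort(key=judge, reverse=True)
    population = population[:10] + [mutate(p) for p in population[:10]]

best = max(population, key=judge)
# `best` maximizes the judge's score, whether or not a human would
# agree that it looks like anything at all.
print(best)
```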
And since that paper, we’ve actually shown that this works: we had a paper that won a Best Paper award at a conference this summer. En route to that paper, we had to stop and show the bizarre result that sometimes the judge just completely fails and starts spouting nonsense.
Why did these failed image-recognition experiments produce such amazingly interesting artifacts? Ken Stanley and Joel Lehman (who you’re going to talk to) are some of the people who pioneered a lot of the thinking about why that’s true.
The overall idea is that if you try really, really hard to accomplish a goal, you won’t make it.
And it sounds paradoxical. But if the goal is relatively simple, like I want to climb that mountain right there, or I want to invent a slightly better watch than the one I have, you can do that. You can focus on that goal and you’ll get there. But if your goals are truly ambitious, such as making a time machine, if you set out to make that time machine you won’t know which direction to head in so you won’t make any progress.
In the same sense that, 500 or 1,000 years ago, if people had tried to make a device that could do long-distance communication around the world with no wires, they would’ve sat there and tried to make devices that did only long-distance communication. But they never would’ve built the cell phone, because the cell phone required computer chips, and rubber, and wires, and plastic, and a network that was built for something else, and so on. The overall idea is that to achieve truly ambitious, excellent things, you can’t have that goal in mind and only try to improve with respect to that specific goal. Instead, you have to be opportunistic, like a scientist recognizing serendipity, and say, “I’ll take anything that’s interesting. I don’t care what the future uses for it are or where they’ll lead. I just want to collect interesting stepping stones to everywhere.” And if you continue with that process for a long time, you’ll eventually get your time machine (if it is possible to make one), and you’ll get your cell phone, and you’ll do marvelous things.
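Editor’s Note: This “stepping stones” idea is the heart of Lehman and Stanley’s novelty search. A minimal sketch (ours, with a made-up behavior descriptor): candidates are scored not on progress toward a goal, but on how different their behavior is from everything in a growing archive.

```python
import random

def behavior(genome):
    # Hypothetical behavior descriptor; in a maze task this might be
    # the robot's final (x, y) position.
    return (sum(genome[:2]), sum(genome[2:]))

def novelty(b, archive, k=5):
    # Novelty = mean distance to the k nearest behaviors seen so far.
    dists = sorted(((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
                   for a in archive)
    return sum(dists[:k]) / min(k, len(dists)) if dists else float("inf")

archive = []
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for _ in range(50):
    # Rank by novelty instead of by fitness toward any objective.
    scored = sorted(population,
                    key=lambda g: novelty(behavior(g), archive),
                    reverse=True)
    archive.extend(behavior(g) for g in scored[:3])  # keep stepping stones
    parents = scored[:10]
    population = parents + [[x + random.gauss(0, 0.1) for x in p]
                            for p in parents]

print(len(archive), "stepping stones collected")
```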
And that’s what human culture does. If you look at modern authors, they’ve read the great works of literature that came before, and they’ll either try to improve within a style or genre, or they’ll mix genres together to come up with an entirely new one. The same thing happens in science, and in evolution: you have a whole bunch of existing species, and then mutations in those genomes, mixed with ecological opportunity, provide a chance to improve on a species or create a new one that can invade an entirely new niche or ecosystem. That is essentially the theory we were trying to go after with this discovery. What we wanted was a generating system that tries to evolve things, and then we were going to use the deep neural nets to evaluate those things based on whether they’re interesting or not.
Halfway through the experiment, we sometimes stopped and decided the judge was doing a good job and producing some nice art; but other times it would come up with these really wacky answers, confidently calling static a jaguar, or calling it an electric guitar. And yet these were very strange, interesting artifacts that a human would never come up with.
A lot of people approached me after that work and said, “How did you think to do that? Amazing, but I never would have thought to try that.” And the answer was that we weren’t trying to do that. It was total serendipity. I guess we were clever enough to recognize that it was an important result when we saw it right in front of us.
d4e: It almost sounds like you’re teaching robots to be design thinkers.
JC: That’s exactly right. We want robots and evolution to act like a curious scientist who is willing to play with a whole bunch of ideas and to recognize serendipity when it appears. If you can recognize serendipity on the wing, or chance when it comes to you, then when something new and interesting or beautiful or functional comes around, even if you weren’t looking for that type of thing, you can say, “That’s interesting,” pick it up, play with it a while, and see where it takes you. We are basically trying to bottle serendipity.
d4e: You seem to be into adventure sports. Did this give you some insight into risk and resilience that you took into your research? Have you broken any limbs?
JC: I don’t know that I’ve ported any insights from adventure sports into AI, with one possible exception: sometimes the rational thinking on top of your brain can get in the way. I’ve been in very dangerous situations where my rational brain was preventing me from acting, because it was overthinking or too busy worrying about the possible downsides. And I’ve literally felt my subconscious brain seize control and make my body act on muscle memory to get me out of a jam. That has saved my life on more than one occasion. I haven’t broken any limbs, but I have used up all of my nine lives. It’s only luck that allows me to sit here and talk to you.
d4e: Thanks Jeff, may you always land on your feet.
Our thanks to Jeff Clune & the Evolving AI Lab.
This episode was recorded and mixed by The Werd Company, and featured music by VVV, available on Soundcloud.
We have an interview with Kenneth Stanley, co-author of Why Greatness Cannot Be Planned, and the inventor, with Joel Lehman, of the novelty search algorithm. Check it out!