These days, amidst a great collective effort to reverse engineer innovation, everybody’s looking to model the success stories. Tales of disruption pepper our social media feeds, and we want the magic formula—the algorithm—for innovation.
While magic is tricky, success is even more deceptive. That’s because our measure of success, the objective, is “blind to the true stepping stones that must be crossed.” These are the words of Joel Lehman and Kenneth Stanley, the inventors of a breakthrough evolutionary algorithm for robotic neural nets, called novelty search.
What do robot brains and algorithms have to do with our current paradigm of innovation?
At the Evolutionary Complexity Research Group (EPlex) at the University of Central Florida, Lehman and Stanley programmed their AI to abandon its objectives and search for novelty, much like nature’s evolutionary “algorithm.” “Do something you’ve never done before,” they told the robots. They put them in a maze. Guess what? The robots running the novelty search algorithm got out of the maze faster than the ones armed with a plan and a list of best practices. In other words, objectives actually hindered the search. Freed from them, the robots stopped banging into walls and learned to walk. Are we so different?
Disruption and adaptation ensure the survival of a species, a business, or any agent in a complex system. A network takes in diversity and puts out emergence (the real hero of anyone’s innovation story).
Case in point: two artificial intelligence researchers who use evolution to program artificial neural networks that “learn” ended up writing a book about Why Greatness Cannot Be Planned. Are we approaching innovation all wrong by holding it to standards that are too rigid?
So if you want to design for emergence, the scientists in our interview say, the name of the game is to be a treasure hunter. The path isn’t always clear until it’s behind you. Go where curiosity leads you in search of novelty, whatever seems interesting, and you’ll begin to collect the right “stepping stones” for that next big thing…
d4e: Ken Stanley and Joel Lehman, two AI scientists, wrote a book about Why Greatness Cannot Be Planned. How did that happen? (I’m guessing that wasn’t the plan.)
Ken: There are a ton of self-help books about how to pursue greatness and achieve your potential. A lot of it is speculative and philosophical. What’s unique about our perspective is that we’re offering hardcore scientific, empirical research and experimentation that supports the approach we’re advancing in the book. So people reading it can feel a level of confidence in where these ideas come from that they normally wouldn’t: we weren’t trying to become self-help gurus; we were doing experiments in artificial intelligence, and we unexpectedly stumbled on the principles we describe in this book about why greatness cannot be planned.
d4e: The Chinese finger trap is a metaphor for innovation. Why?
Joel: In the Chinese finger trap, the steps that you need to take to solve the problem are exactly the ones you wouldn’t expect would lead to the solution. It’s a model of deception in innovation, in that making a breakthrough discovery often involves taking steps that are seemingly unrelated to the objective.
Ken: It’s the simplest example of a type of innovation process we’re claiming is very common: what you need to do looks like exactly the opposite of what you want, and it turns out you need to do exactly the opposite of what you think you should. The Chinese finger trap is designed to be deceptive in that way.
You have to push yourself more into the trap to get out of it. The problems of life are far more complex than that, though, so they’re going to be even worse than a Chinese finger trap in terms of being deceptive. If they weren’t, we would just solve all of them. In order to escape the Chinese finger traps of the world, we have to sometimes be willing to step into the unknown rather than go in the direction that’s obvious or “correct.”
d4e: Great invention is defined by the realization that its prerequisites are in place. Apple spends much less than its competitors on R&D. Do you think that those two ideas are related?
We could speculate that people put a lot of effort into pursuing an objective, and that can be very expensive, because maybe the right stepping stones just haven’t been laid. So you’re going to be grinding for a long time to create all the prerequisites you need to get this thing to work. Whereas if you take an unusual approach (and I would be willing to bet that Steve Jobs wasn’t very objective-driven) where you don’t follow an objective path, you can sometimes arrive somewhere interesting and valuable with a lot less effort than someone who is following an objective. People like Steve Jobs seem to have a knack for following those types of trails and taking the kinds of risks that are necessary, and saying, “Let’s just see where this leads.”
d4e: How did an algorithm change your life? Was it a eureka moment, or a slower evolution?
Ken: This question gets to the origins of the idea behind novelty search. There was a particular eureka moment, before the algorithm existed, that led to novelty search; but there was also, later, a gradual dawning for both Joel and me that the algorithm is really a way of thinking about life.
Before novelty search, there was Picbreeder, a website we put up in our research group where people could come from the internet to breed pictures and then publish them on the site. That sounds a little strange, but basically it means that you could come in and pick your favorite picture from a set, and it would have offspring. And the picture’s “children” would be slightly different from their parents, just like if you had children, they wouldn’t be exactly the same as you, but not completely different either.
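As a concrete toy model of that breeding step, here is a minimal Python sketch. The flat list of numbers is a stand-in genome and the names and mutation rates are illustrative assumptions, not Picbreeder’s real code; the site itself evolves small image-generating networks rather than raw parameter lists.

```python
import random

def offspring(parent, n_children=8, rate=0.1, strength=0.3):
    """Children are slightly different from the parent: each gene
    has a small chance of being nudged by Gaussian noise."""
    return [[g + random.gauss(0, strength) if random.random() < rate else g
             for g in parent]
            for _ in range(n_children)]

# One session of interactive breeding: the user repeatedly picks a
# favorite, and that favorite parents the next set of candidates.
genome = [random.uniform(-1, 1) for _ in range(16)]
for _ in range(3):
    brood = offspring(genome)
    genome = random.choice(brood)  # stand-in for the user's pick
```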
I had an experience playing with Picbreeder where I started with an image that looked like an alien face. I was playing with the image, and it eventually bred into a car. The moment the alien face turned into a car was the epiphany: I was struck by the realization that I had achieved something interesting without trying to achieve it. It may sound trivial, since Picbreeder is just a toy, but everything I had been taught for years in computer science said that the way you make computers do things (in fact, the way we as humans generally do things) is to set your goals and somehow push the computer in the direction of achieving them. This experience was so different from that.
I was breeding these pictures myself, but we have evolutionary algorithms that breed automatically as well, without human assistance. So I realized that this experience of achieving something without trying to achieve it probably has implications far beyond a picture breeding service. This led to the proposition that there could be an algorithm that doesn’t have a clear objective.
This is what I began to speak to Joel about before the novelty search algorithm was created.
d4e: So the idea of discovery without objectives led you and Joel to create the novelty search algorithm. You say that novelty search is paradoxical. How so?
Ken: The novelty search algorithm reflects the philosophy that sometimes you can discover things if you’re not looking for them. It gives the computer the ability to have serendipitous discovery but not necessarily be pigeonholed in the direction of trying to search for one thing and one thing only, or create one type of solution to a problem. Instead of a robot that has one type of walking gait, for example, maybe you have many.
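Mechanically, novelty search scores each candidate by how far its behavior lies from behaviors seen before: in the published algorithm, the average distance to its nearest neighbors in the current population and a growing archive. The Python below is a minimal sketch of that loop under those assumptions, not the authors’ implementation; the function names, parameter values, and the simple truncation selection are illustrative.

```python
import random

def novelty(behavior, behaviors, archive, k=15):
    """Sparseness: the mean distance from this behavior to its k
    nearest neighbors among the current generation's behaviors and
    the archive of past novel behaviors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    distances = sorted(dist(behavior, other) for other in behaviors + archive)
    nearest = distances[1:k + 1]  # skip the zero distance to itself
    return sum(nearest) / len(nearest)

def novelty_search(init, mutate, evaluate, generations=100,
                   pop_size=50, archive_threshold=1.0):
    """Evolve genomes rewarded only for behaving differently.
    `evaluate` maps a genome to a behavior vector, e.g. the final
    (x, y) position of a maze robot; no objective fitness is used."""
    population = [init() for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        behaviors = [evaluate(g) for g in population]
        scores = [novelty(b, behaviors, archive) for b in behaviors]
        # Remember behaviors that were sufficiently novel when found.
        archive += [b for b, s in zip(behaviors, scores)
                    if s > archive_threshold]
        # The most novel half of the population breeds the next one.
        ranked = sorted(zip(scores, population), key=lambda p: -p[0])
        parents = [g for _, g in ranked[:pop_size // 2]]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return archive, population
```

Note what is absent: the maze robot’s distance to the exit never appears. The archive is what drives the search outward, because anything resembling what came before scores poorly.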
We were playing with this for years, and it would constantly surprise us by doing things that people wouldn’t expect. You don’t tell the computer what to do, but it ends up solving your problem better than if you did. We saw this paradox over and over again. After a few years we realized that what we were seeing was about more than a computer search algorithm.
The more I spoke about the algorithm at computer conferences, the more people would ask about things unrelated to computers, such as: What does it mean for my life if sometimes the best way to find something is to be not looking for it? Does this have any broader implications for how we run innovative cultural institutions? Or how we run science?
Or how about the way we support innovation in society?
It became apparent then that it is extremely important that we have this discussion as a society. If objectives are not always the way to guide innovation and scientific progress, then why is it that almost everything we do is objective-driven? That’s when we decided to write a book, because this kind of message is hard to get out in a computer science journal article aimed only at artificial intelligence researchers. This is a much broader issue, in terms of how we foster innovation and treat objectives in our culture.
d4e: In your book, you ask us to imagine a cavernous warehouse of all possible discoveries. You say that “the structure of the search space is just plain weird.” Can you tell us what you mean by that?
Joel: The structure of the innovation space is weird in that it’s hard to predict where certain things will be. The linkages between different kinds of innovations are surprising. That relates to the broader area of serendipity in science or artistic realms, where you might inadvertently create the next big thing. A typical example is the vacuum tube, which was created as part of fundamental research into electricity. The person who was exploring that didn’t have the idea of a computer in mind. It just turned out that from this one point in space, from discovering a vacuum tube, you actually could reach computation.
Ken: Vacuum tubes facilitate computers, and that’s a connection that exists in this big ‘room’ of possible things. But who would ever know that? Somebody later picked up on it and said, “Now that this exists, we can create this other thing.” There’s a lot of opportunity there for serendipity, in the sense that you wouldn’t even be working on vacuum tubes if your main interest was computation. Vacuum tubes don’t look like they have anything to do with computation. So in some way, getting all this stuff to exist requires that people sometimes are not working intentionally toward the ultimate achievement that stems from the chain of events their effort set off.
d4e: Order is important in search. How so?
Ken: When you first hear about novelty search, that we should search for things recognized for their novelty and ignore everything else, your intuition might say, “This is just random. How can that kind of search be beneficial?” I think people assume there’s some kind of coherent order that a search induces. In other words, we assume that things get better as you continue to improve. That’s an order we’ve come to expect from an objective: if you’re trying to get better at school, your test scores will go up. We expect to start out low and get higher, and that’s the kind of order we’re comfortable with.
Whereas with novelty, it’s harder for us to think about what the order of occurrence is going to be, because we’re no longer talking about an objective metric. What we try to argue is that there is an order that’s inherent in a search for novelty — it’s just a different kind of order, one of increasing complexity.
Instead of increasing quality along some objective metric, novelty search basically creates a situation where if you continually try to do something new, you will quickly exhaust all the simple things there are to do. There are only so many simple ways to do things. By necessity, if you succeed in continually seeking novelty, things will have to become more complex over time.
At some point, somebody invented a wheel. Thousands of years later, someone was on the moon. Things don’t go in the other order. You don’t figure out how to go to the moon and then later come up with the wheel. So there is an order in innovative processes that are driven by invention rather than by trying to achieve a specific objective metric, and that order tends to be one of increasing complexity. The reason I bring this up is that there’s good reason to be confident that the search for novelty does follow some kind of coherent principle, and it is anything but random. It’s just not following the order that we’re used to (of ‘worse to better’).
We wanted to suggest to our readers that going from worse to better is actually not that principled, even if it makes you feel comfortable, because it’s a mystery how to do it. We don’t necessarily know what the stepping stones are. So saying “I’m going to keep on improving” is really just a security blanket if you don’t know how that improvement is going to happen.
d4e: The age of best practices is over. Would you agree with that?
Ken: There is room, despite everything we’ve said, for trying to improve. But we have to be clear about where that process is appropriate. If your aims are relatively modest, it can be entirely appropriate to just try to improve. If you just want to improve your lap time, that’s reasonable. But when it comes to fostering innovation on a larger scale, I’d be OK with endorsing the idea that the age of best practices is over, because we should recognize that simply trying to continually improve in an objective sense just doesn’t work.
There’s a great opportunity for a paradigm shift here. The amount of information we have now from artificial intelligence is starting to expose problems with the traditional view of achievement and innovation. Our book exists because we had the ability to do experiments that would have been impossible in the past. These experiments exposed a flaw in the paradigm of “innovation through continual improvement.”
Joel: And yet it seems that, at the same time, the cultural tide is pushing further toward the paradigm of objectives and continual improvement. We have evidence that this isn’t how the world really works, especially in areas of innovation, discovery, and creativity. It’s troubling that so many innovation endeavors are still ruled by objective-based approaches. When it comes to innovation, maybe we should loosen the reins just a bit and integrate some of the knowledge we’re gaining from our scientific understanding of natural evolution and of how creativity works; some of these insights come from artificial intelligence.
Ken: There should be a paradigm shift, but we wrote the book because there hasn’t been. This is a current argument about how we should approach innovation. When Joel says we run a lot of things in this very objective-driven way, that’s literally true. Look at what we’re doing in schools. The standardized testing craze is all about objective measurement, and it’s used for all kinds of things, not just for students. We basically say the school has to objectively improve on some metric, or the school gets penalized. It’s all based on objectives, and there’s a lot of discussion about whether that’s a good idea or not, but we’re not part of that debate explicitly.
Our work offers a different angle: the assumption that if you just keep demanding higher scores, eventually everyone will get a 100, looks pretty naive. There should be room for people to try new things, and that could lead to scores going down from time to time. If you always penalize scores going down, then none of those things become possible.
In the world of science funding, one of the things you almost have to do to get money for research is to state your objectives. We’re running our entire federally funded scientific enterprise (really, billions of dollars) based almost entirely on objectives. You can hardly get a word in if you don’t state at the beginning what you’re trying to achieve. It’s not common sense; it’s a problem.
d4e: There’s a book called Why A Students Work for C Students. How does that relate to this philosophy?
Ken: I haven’t read that book, and I think it’s obvious that that’s not always the case — there are plenty of A students who are the bosses of C students. But that’s an interesting question. You could imagine there’s a connection there in that somebody might assume that if you get A’s that’s the correct goal for getting to the top of the heap in some organization. In reality, often it’s the case that the route to success is more circuitous. It may be that the C student was more willing to take risks that the A student just didn’t take because the A student was so single-mindedly focused on doing what everyone says you’re supposed to do in order to be successful.
d4e: Objectively speaking, unstructured play can be bad for us as individual adults, but good for us as a society. True or false?
Ken: I would say false, because I think it can be a good thing for both individuals and society. Unstructured play can be risky, though. It may lead to no particular advance for the individual; on the other hand, it may lead to something great. You just can’t be sure. You may have a hobby, and pursuing that interest may just be “play” for you, but it could end up being the stepping stone to your next great achievement.
And of course I’m totally in agreement with the idea that it’s also beneficial to society, because we need people to pursue their passions and try the things that other people wouldn’t necessarily try, so that they can build the stepping stones for others to follow.
Everybody can benefit, but we have to just accept that anything unstructured has risk. That’s why we tend to be against this kind of approach to life as a policy matter: we like to control things with standards and objectives and metrics, because we’re afraid of risk, ultimately. At the same time, you have to take risks in order to have great achievements in the end.
d4e: Let’s say I run a venture capital firm. How should I go about building a portfolio of startup investments?
Ken: I think venture capitalists actually put the ideas in our book into practice better than a lot of other areas of society, because they understand the value of a portfolio: not all of your bets need to pay off, just some of them. VCs are willing to go in some very exploratory, risky directions. If you have one big hit, it can make up for all the ones that didn’t pan out. This is, I think, a pretty good lesson for society in general. In a lot of our institutions we guard against failure as if making a mistake were some kind of pathology. Venture capitalists have good instincts and are willing to have failures, and that allows them to search in a less objective way. I think we would find that the most successful venture capitalists are less objective about their portfolios.
d4e: You don’t seem to dwell much on the concept of probability. Don’t you like it?
Ken: The book isn’t really about probability, but I think we would endorse probability as an important concept. We see its importance in our field of machine learning and artificial intelligence. The point the book makes is largely independent of an in-depth discussion of probability, although probability does factor into risk.
Any individual discovery could be regarded as highly improbable. In innovative processes, the likelihood of making a particular discovery is unpredictable. And yet, overall, you can increase your ability to make discoveries and the probability that you’ll make some interesting discovery.
d4e: You say that novelty is information-rich. What do you mean by that?
Joel: One way to look at novelty is that it’s information based not on where you’re trying to go, but on where you’ve been in the past. In some sense, it can be seen as more information-rich than an objective-driven approach, in that you know exactly where you’ve been, so that information is certain. When you say ‘this is novel,’ you can have confidence that it actually is new. Whereas if you’re trying to take a step toward your potential objective, you have to be willing to be uncertain, because you really don’t know whether it’s going to be a stepping stone toward your goal.
More than that, the idea of being genuinely different often requires some sort of conceptual advance. You can imagine, for example, being on a skateboard. Who’s going to be more likely to create a novel skateboard move? Will it be me, who’s likely to fall on my butt, or will it be Tony Hawk, who has all this knowledge and experience to create something genuinely new? There is some ability, knowledge, or talent that’s required to create something that’s genuinely new. In that way it’s also a source of information.
d4e: Is it possible that there’s a historical trend toward us wanting more certainty? And if so, is the value of novelty rising or falling?
Ken: I think that novelty has always been valuable. What’s happening is that because of things like the internet, there’s now a significantly greater potential for the creation and dissemination of novelty. We’re exposed to much more novelty in a short time than we used to be, because the network has created this capacity to expose people to new ideas almost instantaneously and from enormous numbers of different people. That means that it’s going to accelerate the production of novelty, and we’re all going to be exposed to more, and that’s a feedback cycle. Now that there’s more novelty around, there are more stepping stones, and so more people will create novelty.
d4e: What about machine learning and the curation of information? What about phenomena like the popularity of the Kardashians? Aren’t we suppressing novelty?
Ken: Because computers are making decisions for us about what we look at, and those decisions might cause us to not be exposed to interesting things?
d4e: Right, like the rich get richer effect. The more that machines learn our preferences, the more they are fed back to us.
Ken: I think there is that risk. We have to guard against always being given just more of what we want, what we are already comfortable with. I’m pretty optimistic about human nature and its ability to get around the tendency toward convergence. Certainly I think the algorithms will play a role in that too. Algorithms like novelty search can give us a bit of a clue about how to create computer algorithms that are not so convergent that they just always push you in some predetermined direction.
In general, we like to be exposed to stuff that’s unexpected. And we see that there’s been some attempt to do that in services like YouTube, for example. On the homepage they try to expose you to things you weren’t searching for. Of course they may base it on things you’ve searched for in the past, so there’s a bit of a paradox there.
It’s in the interest of anyone running a business to hook people into new things. People are trying to do that, with algorithms, but at the same time, the danger you’re identifying is real, and we should be cautious about it — because there’s a tendency to trap people in the things that they’re comfortable with, and as long as that’s making money, everybody’s happy. But that doesn’t produce the stepping stones we need for innovation.
Joel: One potential danger with some of these algorithms is that they can get very good at providing us with trivial novelties — novelties that are just some modulation of some formula. “The top 10 X, Y or Z.” It fulfills a very basic human desire for novelty, at a very trivial, unfulfilling level. Maybe over time people will become more aware that they’re being exploited by these algorithms. Like Ken, I’m optimistic about humanity’s ability to adapt to technologies. But it is worrisome that this very human desire for novelty can be undermined by clickbait.
d4e: Will there be enough competition in artificial intelligence for robots to evolve, given that some firms may dominate development?
Ken: These kinds of endeavors can become rather objective when a dominant firm has set the standard for success. That potentially dampens the ability to try new things. Something really novel might not look as good. Someone might say, “Our way of doing things is objectively superior; these other approaches are objectively inferior, and you shouldn’t invest in those.” I think that’s a problem, and we are suffering from it right now. There is a belief that there’s a canonical approach that works really well, and that therefore other things should be relegated to obscurity. Shedding some daylight on less conventional approaches would help foster diversification. Of course, people still need to be experts. We’re not saying that any idea off the street is worth millions of dollars; but if an expert has an unconventional idea that looks interesting, let’s give it a try.
d4e: Making distant associations and unlikely connections within the network is, to me, crucial to innovation. For us, these processes are often subconscious. Will AI have a subconscious?
Ken: I think that’s on the minds of people in the field. Generally, people in machine learning are concerned with what you’re describing as a subconscious process — the ability to make deep, subtle connections. That’s probably a little bit ahead of where the field is at the moment in terms of making those connections through algorithms on computers, although there’s certainly work being done in that direction. Anything that’s interesting about the human intellect is fair game for AI.
Joel Lehman is an assistant professor at the IT University of Copenhagen. His research goal is to write algorithms inspired by natural evolution that continually produce a wild diversity of interesting and complex things.
Kenneth O. Stanley is the director of the Evolutionary Complexity Research Group (EPlex) at UCF. His group’s research focuses on abstracting the essential properties of natural evolution that made it possible to discover astronomically complex structures such as the human brain. The work is in part an approach to artificial intelligence.