In the decades leading up to the turn of the 18th century, our fundamental understanding of the universe was undergoing extraordinarily transformative changes. The ancient atomism of Democritus and Lucretius, which had for the first time cast our entire world as a collection of tiny particles in constant motion, had miraculously survived the Dark Ages and spread across Western Europe and into the minds of intellectual giants like Thomas Hobbes, René Descartes, Robert Hooke, and Isaac Newton. In the flurry of monumental works that appeared during this time — from the Principia and Leviathan to Micrographia and Philosophy of Mind — we began to see fleeting references to reincarnations of the atomist’s dreams, but they were painted in different hues that suggested clockwork machinery and the fledgling industrial age. This was the mechanical universe, an idea so far-reaching that it would prompt sweeping changes in the sciences, religion, the arts, and even the structure of our society.
To understand what it means, we look to a mathematician named Pierre-Simon Laplace and his demon. In a thought experiment, Laplace imagined a creature that — if given the wherewithal to know the present state of the entire universe and all the laws that govern it — could perfectly predict the future states of the universe (Laplace, 1902). This demon could see the universe as a complex machine. If he could only break this machine down into its constituent parts, understand them in turn, and reason about the laws that determine their motion, he could then foresee the outcomes of interactions of the individual parts. So it was that the Enlightenment thinkers began to take apart the world and put it back together as an intricate menagerie of mechanistic oddities.
This way of thinking turned out to be incredibly useful. A panoply of practical inventions and mechanical novelties captured the imaginations of laymen around the world. Newton was hailed as a scientific demigod and his theory of gravitation seemed to always give the right answer. He had captured the whims of Mother Nature in a set of elegant but simple physical laws that inspired generations of scientists, but Newton himself was plagued by a sense of unease. He recognized in his own equations an absurdity, namely the assumption that material bodies can interact without touching. This concept was known as action at a distance and the mystery surrounding it would not be resolved until an idiosyncratic Swiss patent clerk tried his hand at a new theory of gravity more than two centuries later.
Despite the uneasiness in the underlying physics, the mechanistic interpretation of the world continued to drive innovation and discovery. The mechanical wonders of the 19th century culminated in machines that could change their behavior based on inputs. Elaborate inventions like the Jacquard loom, the player piano, and Charles Babbage’s uncompleted Analytical Engine were some of the first programmable machines that would lead, in the following century, to the development of a general-purpose computer.
The monstrous computing machines that were designed in the first half of the 20th century would not have disappointed Babbage. From vacuum-tube designs like ENIAC to the first transistor-based supercomputers of Seymour Cray, the power and utility of computation were on the rise, due in no small part to the tremendous efforts to win the Second World War.
Meanwhile, the father of computer science, Alan Turing, having already presented a mathematical basis for computation itself, began to apply his insights to biology. He wanted to explain the formation of biological patterns in nature, like the stripes on a tiger’s coat or the spots on a ladybug. In a landmark paper, he described how several simple chemical interactions could lead to complex, large-scale patterns (Turing, 1952). At the same time, the precursors to neural networks were being actively developed (McCulloch and Pitts, 1943), and John von Neumann, a genius of similar stature in the computing world, likewise began to draw some bold analogies between biology and computation with his series of lectures on the human brain and the computer (von Neumann, 1958). For some time, however, the connections between these two fields would remain largely speculative and nebulous.
In the 1960s and 1970s, computer scientists began to split supercomputers into parallel components, and ARPANET, the precursor to the Internet, was in its infancy. Separate computers could now work in concert on common problems. They could be distributed geographically to form long-range communication networks. Underlying this physical shift was a hugely important conceptual shift: instead of a monolithic, complex, centrally controlled machine, we could think in terms of a connected collection of distributed machines with no central control. Each of these machines could interact with its counterparts by following a simple set of rules, or a protocol. In technical parlance, we now had a distributed system that operated by carrying out a distributed algorithm.
In a distributed system (DS), we can think of causality as specified at a lower level than in a monolithic system. In the latter, we need some way to keep track of the state of every component in the system in order to determine what it should do next — we need a Laplace demon in our machine. However, in a DS, we can rid ourselves of the demon by specifying the individual interactions between components, and then ensuring that these interactions at a large scale will produce the properties we desire, such as constant connectivity or forward progress of messages between components. We may not be able to make guarantees about the actual state of the system as a whole, but we know that, by the design of the distributed algorithm, the whole must have certain properties. Just because we give up our perfect knowledge of the system does not mean that we give up our ability to make predictions about it! A good analogy for this is the study of gases in physics, where we have very little information about the individual gas atoms, but can still accurately predict their behavior at the macro scale from the known principles of their interactions at the micro scale. For a distributed system to achieve some goal, we must carefully specify the local interactions and then prove that they lead to global properties that achieve the desired goal.
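To make the flavor of this concrete, consider a toy gossip-style broadcast, sketched below in Python purely for illustration (the ring network, the forwarding rule, and the node count are my own assumptions, not drawn from any particular system). Each node follows one local rule: if it holds the message, it passes it to one randomly chosen neighbor each round. From that rule alone, the global property follows that every node in a connected network is eventually informed, and no component ever needs a demon's-eye view of the whole system.

```python
import random

def gossip(neighbors, source, rng=random.Random(0)):
    """Spread a message using only local forwarding; return rounds until all nodes have it."""
    informed = {source}
    rounds = 0
    while len(informed) < len(neighbors):
        rounds += 1
        for node in list(informed):
            # The only rule: an informed node tells one randomly chosen neighbor.
            informed.add(rng.choice(neighbors[node]))
    return rounds

# An illustrative 16-node ring: each node knows only its two adjacent neighbors.
n = 16
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print("rounds until every node is informed:", gossip(ring, source=0))
```

The guarantee here is probabilistic, which is exactly the kind of macro-scale statement the gas analogy suggests: we cannot say which neighbor a node will pick, but we can say the message spreads.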
In terms of understanding the world around us, this way of thinking may be just as transformative as the classical mechanism of Newton. But we may only now be starting to see it.
In nature, distributed systems are everywhere: from the sea of cells in our bodies to schools of fish and flocks of birds. Entities connected in complex networks, from markets and economies to cities and nation states, constantly interact according to distributed algorithms that, for the most part, remain mysterious.
Now the connections between computer science and biology are enjoying a resurgence, and the results are incredibly striking (Navlakha and Bar-Joseph, 2014). We are discovering that in biology, much like in many large-scale distributed computing systems, the speed at which messages pass between entities is relatively unimportant. More significant is the degree to which the system can maintain its operation in the face of all kinds of failures. We are finding that, just as in computer networks, certain network structures in biology are more amenable to a particular distributed function. Other similarities abound, and we are only just getting started.
In fairly recent work at Microsoft Research, computer scientists and biologists collaborated to show that the Cell Cycle Switch in all eukaryotic cells (that means the cells inside you and me) is actually implemented by a biological computation of a well-known distributed algorithm called Approximate Majority (Cardelli and Csikász-Nagy, 2012). When a cell decides to die, replicate, or transition into some other state, that decision essentially derives from a distributed computation. It is hard to express how astonishing it is that an optimal algorithm that was designed by humans for unreliable networks of computers was then discovered to already exist inside of us all. Other work has shown that schools of prey fish moving to evade predators mirror algorithms that were designed to reach consensus among a group of distributed computers (Ioannou et al., 2012). We can expect many more analogies with distributed computing to arise in the coming years, and we can expect that these analogies will allow us to discover new and exciting things about our world.
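The algorithm itself is simple enough to sketch. Below is a minimal Python simulation of the three-state Approximate Majority protocol, written for illustration only (the population size and the initial split are arbitrary choices of mine, not parameters from the paper). Each agent holds one of the states X, Y, or B (undecided). When two randomly chosen agents interact, disagreement knocks the responder into the undecided state, while a decided agent recruits an undecided responder to its side; with high probability the whole population settles on whichever state started in the majority.

```python
import random

def approximate_majority(num_x, num_y, rng=random.Random(1)):
    """Random pairwise interactions until every agent agrees on X or Y."""
    agents = ['X'] * num_x + ['Y'] * num_y       # 'B' marks an undecided agent
    interactions = 0
    while len(set(agents)) > 1:                  # loop until consensus
        i, j = rng.sample(range(len(agents)), 2) # random initiator i, responder j
        a, b = agents[i], agents[j]
        if a in ('X', 'Y'):
            if b == 'B':
                agents[j] = a                    # a decided agent recruits an undecided one
            elif b != a:
                agents[j] = 'B'                  # disagreement makes the responder undecided
        interactions += 1
    return agents[0], interactions

# Illustrative run: a 60/40 split almost always resolves in favor of X.
winner, steps = approximate_majority(num_x=60, num_y=40)
print(f"consensus on {winner} after {steps} pairwise interactions")
```

Nothing in these rules refers to the population as a whole, yet the global outcome is reliable, which is precisely the pattern of local interactions yielding global guarantees described earlier.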
In the meantime, we can ask our own questions. What are we computing? How do our local interactions with each other affect the fabric of our society? As our world becomes more complex and intertwined, it may actually be healthier for us to renounce a monolithic view of it and instead design our interactions so that they produce the properties of a more peaceful, more understanding, and more distributed world.
REFERENCES
Lucretius. De Rerum Natura.
Laplace, Pierre-Simon. A Philosophical Essay on Probabilities (New York: John Wiley & Sons, 1902).
Turing, Alan M. “The Chemical Basis of Morphogenesis,” Philosophical Transactions of The Royal Society B: Biological Sciences 237, no. 641 (1952): 37-72.
McCulloch, Warren S. and Pitts, Walter. “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5, no. 4 (1943): 115-133.
von Neumann, John. The Computer and the Brain (New Haven/London: Yale University Press, 1958).
Navlakha, Saket and Bar-Joseph, Ziv. “Distributed Information Processing in Biological and Computational Systems,” Communications of the ACM 58, no. 1 (2014): 94-102.
Cardelli, Luca and Csikász-Nagy, Attila. “The Cell Cycle Switch Computes Approximate Majority,” Scientific Reports 2 (2012). DOI: 10.1038/srep00656.
Ioannou, C.C., et al. “Predatory Fish Select for Coordinated Collective Motion in Virtual Prey,” Science 337, no. 6099 (2012): 1212-1215. DOI: 10.1126/science.1218919.
Kyle C. Hale is a PhD student in Electrical Engineering and Computer Science at Northwestern University. His research is in experimental computer systems, high-performance computing, and computer architecture.