I am standing in the dark, looking at the shadows on the walls of this ancient cave when my pocket buzzes — Facebook wants to notify me that I’ve been invited to a play tonight. Who am I? You might say I am Plato, except, like, I also have a smartphone. Editor Stacy Hale had a great conversation with Bryan A. Knowles about AI and giving meaning to language.
Greek philosophy, among some silly things, asks the real questions, like: “What is the true nature of the universe?” This question might seem impossible to answer simply because it is impossibly vague, but Plato’s famous Allegory of the Cave argues that we can never even look upon the face of Truth — our ability to know anything is intrinsically limited by our imprecise senses, psychological processes, and non-universal language.
BK: Keep in mind that this train of thought is carried on the assumption that one true Truth exists, something that Relativity and the Uncertainty Principle and the Uniqueness Problem all contradict. Regardless, Plato’s Cave remarkably sums up much of modern scientific skepticism: There are things we can never know, such as whether any given computer code will halt when executed; there are things we can never approximate, since all it takes is a single electron counted wrong and our weather predictions will quickly be way off; and, there are things we can never expect, such as the patterns on a rose when all we are looking at is the inner workings of the plant cells that make it up. From sociolinguistics to business analysis, the world is a lot less certain than anyone could ever be prepared for — you too, Mr. Statistician. It’s much easier to shrug and say, “I don’t know,” then pull out your smartphone and ask, “Siri, what is the meaning of life?”
Even if someone, some very lucky person, could see the true nature of the universe, he or she could never transplant that knowledge into the minds of others because language is not shared, but is created by the community that speaks it. Simply put, the smartest English-speaking genius in the world speaks a slightly different English than the one I do, and it is this slight difference that keeps me from hearing exactly what that genius has to tell me.
SH: So here’s this paradox: No one can speak for the network itself, unless we all do. When it comes to humanity and our creations, we seem to need to step back from our own mind and model it in order to understand it. But in doing so we just create more models, and they take on a life of their own because their creator had an incomplete perspective.
BK: Because we all occupy a uniquely subjective position in the network. Because we always fail, to some degree of precision, to dialog perfectly with the collective commons. Because how can we even know how our own minds work — something that would make Freud proud — without dialoging with it?
Since we can’t understand each other perfectly, why would our phones be any different? Anything you text me and anything Google’s AI predicts for me both have to be read by the same, imprecise language cortex. So, is my smartphone trapped in the cave with me?
“Do our new, tech-enabled conversations provide us better glimpses into reality?”
BK: If my phone and Google’s servers and the routers and network switches and NSA intercepts along the way — all of it — are running machine code, and machine code can be, with enough time, carried out by a human with nothing better to do, and there are answers that humans can never answer, it must be true that Google, or any artificial intelligence, can never answer those same questions either.
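The undecidability claim here is Turing’s classic diagonal argument. A minimal sketch, in Python, under the assumption of a hypothetical oracle `halts(f, x)` that claims to predict whether `f(x)` halts — neither function exists in any real library; they are illustrative names only:

```python
# Sketch of Turing's diagonal argument: no program can decide,
# for all programs, whether they halt.

def make_paradox(halts):
    """Given a claimed halting oracle, build a program it must misjudge."""
    def paradox(f):
        # Do the opposite of whatever the oracle predicts about f(f).
        if halts(f, f):
            while True:      # oracle said "halts" -> loop forever
                pass
        return None          # oracle said "loops" -> halt immediately
    return paradox

# Any concrete oracle we try is provably wrong on some input.
# Example: an oracle that always answers "never halts".
never_halts = lambda f, x: False
paradox = make_paradox(never_halts)
result = paradox(paradox)    # halts immediately and returns None,
                             # contradicting the oracle's own prediction
```

Whatever `halts` answers about `paradox(paradox)`, the program does the opposite, so no correct oracle can exist — and since Google’s servers run the same machine code a patient human could execute by hand, the limit binds them too.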
SH: There are more questions though! The ones we never could, but someday might answer: Has knowing what Bob across the globe had for breakfast this morning changed the fabric of knowledge itself? Do our new, tech-enabled conversations provide us better glimpses into reality? Or a tighter grip on language? Aren’t we better off than the Greek masters? Interactions between plant cells can produce fascinating patterns. Doesn’t that mean all our new and networked interactions on the web have a chance to produce answers to age-old, once-difficult, questions? Just how smart is a global community of humans?
BK: Consider how as soon as we create any new artificial tool, like AI, we quickly integrate it into our lives. That creation then, in time, becomes a defining part of who we are before we finish contemplating our original position within Plato’s cave.
SH: Okay you’re right. Let’s try again. I think your question was: Are we really any better off than the Greeks, even wearing the laurels of all our technological advancements in the last century? And I would add to that: Can understanding network behavior bring us closer to answering that question, “Who am I?” If you could see anything, at any scale, as a program — ideas, traditions, public policies, national beliefs, self-identities — perhaps as machine learning algorithms, then AI is teaching us that this model is a very powerful idea. It implies that all of these “programs” can be rewritten. So, I am hackable. You are hackable. Everything is hackable.
BK: Who’s doing the hacking? God? Nature? Nature is a complex system, both gaining order and breaking down all the time due to self-organized criticality. Only now are we bearing witness to it from such a perspective, as our language for capturing it evolves, and the scientific community — then in turn the general public — propagates this whole philosophical drama in our networked, collective processes. In this way, we are inseparable from nature — we oscillate between thinking we can control Truth and admitting that we never will. All the while, we construct artifacts as experiments to model and understand how nature works. This understanding builds up, then breaks down as soon as we think we’ve got a set of laws we can fully depend on.
“Nature is a complex system, both gaining order and breaking down all the time due to self-organized criticality.”
“To obtain the truth in life, we must discard all the ideas we were taught and reconstruct the entire system of our knowledge.”
— René Descartes
SH: Nature always incorporates multiple perspectives. Opposing forces and points of view are what keep the system alive. Remember, Descartes defined knowledge in terms of its opposite, doubt. Coincidentia oppositorum. We can’t keep being afraid of being wrong, of not having the right data. We have to unlearn that.
BK: When everyone’s on the same page, we risk falling into fragility, just asking for Nature to respond with the one thing that will undo us. Absolute consensus is, systemically speaking, a bad choice. What we really need is community as Nature designs it. At any scale, between cells, two people, or an international forum, diversity and dialog are essential for whatever adaptations we’ll need to make to survive.
SH: Instead, I’m afraid, we want AI to give us all the answers, although, ultimately, we’re afraid of what it will do. We’re ashamed of our own ignorance (ironically, the very thing that inspires us to reach out and cultivate collective intelligence). That’s our algorithm. So maybe that’s why we created AI. We wanted a technological failsafe in case we became socially fragmented and unable to process, on our own, the masses of information that are now available to us. And sometimes these technologies themselves become a self-fulfilling prophecy. An echo chamber. Like chatbots. What are we really afraid of? Being alone?
Can I “Okay, Google” my way out of this?
BK: What should we do then?
SH: Keep connecting I guess.