Had we talked three or four years ago, I’d have told you that there were some tasks computers were exceptionally good at completing, like calculations or repetitively crunching scenarios.
I’d have also told you, though, that there were some things they weren’t very good at doing, and probably never would be, simply because of the number of variables involved, such as driving a car, reading emotions, or pretending to be human.
On some level, that changed this week.
AlphaGo, a product of Google’s DeepMind artificial intelligence (AI) lab, beat Lee Sedol, a 9-dan professional player (think chess grandmaster), in a five-game match of Go, sweeping the first three games on the way to a 4–1 final.
That doesn’t seem especially significant at first. After all, computers long ago proved they can beat humans at most games, including intricate games like chess. Go, however, represents an entirely different level of complexity. Such is the complexity (there are 2.08168199382×10^170 possible board positions) that even human grandmasters of Go will describe their playing process as part analytics and strategy and part pure human intuition. Computers simply aren’t built to intuit anything, much less something with the complexity of Go.
Go is simply the most recent of many milestones that have fallen to artificially intelligent computer programs in the last year or so. From self-driving cars to AIs that have passed the Turing Test by successfully fooling someone into thinking they were a human being, things previously thought impossible are suddenly being tested in real-world scenarios. Why the rapid burst of progress? There are a couple of reasons.
First, Moore’s Law still holds true. The short version: The amount of processing power you can buy for a dollar doubles every 18 months or so. First published in 1965, it’s held true for the most part ever since. That’s not such a big deal early in the process, because the amount of processing power is small and doubling it doesn’t really matter much. But when you keep doubling every 18 months, doubling and doubling again, the growth is quite literally exponential. You reach a tipping point where things that were previously thought impossible suddenly become thinkable, then doable, in a very short time span.
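To see how quickly that compounding runs away, here’s a quick sketch. (The 18-month doubling period is the rule of thumb from the paragraph above, applied as straight arithmetic, nothing more.)

```python
# Compound the "power per dollar doubles every 18 months" rule of thumb
# to see how fast the growth runs away over longer spans of time.
DOUBLING_PERIOD_YEARS = 1.5  # assumed 18-month doubling time

for years in (3, 15, 30):
    doublings = years / DOUBLING_PERIOD_YEARS
    factor = 2 ** doublings
    print(f"{years} years = {doublings:.0f} doublings = {factor:,.0f}x the power per dollar")
```

After three years you’ve only gained a factor of four; after thirty, more than a million. That is why the impossible becomes doable so suddenly.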
We’re reaching a tipping point in processing power. Tasks that previously only the human brain could handle can now be done faster and more efficiently by a computer. It’s not just that technology is advancing; it’s that the rate at which it advances is accelerating.
Second, researchers have stopped trying to teach AIs how to think like a human (or, more properly, like they think a human thinks) and instead have begun teaching them how to learn. Machine-learning algorithms work by providing an AI framework with a basic set of assumptions and capabilities, then giving it a goal. The system then processes through scenario after scenario to optimize toward the goal, eventually “teaching” itself through trial and error what the most effective strategies are.
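As an illustration of that trial-and-error loop, here is a toy sketch in Python. This is my own example, not DeepMind’s method (AlphaGo actually combined deep neural networks with tree search): an “agent” that starts knowing nothing, tries actions, and gradually settles on whichever one pays off best.

```python
# A toy sketch of learning by trial and error: the agent has three
# possible actions with hidden payoff rates. It keeps a running average
# of each action's observed reward, mostly exploits its best estimate,
# and occasionally explores at random (a simple epsilon-greedy scheme).
import random

random.seed(0)
true_payoffs = [0.2, 0.5, 0.8]  # hidden from the agent; action 2 is actually best
estimates = [0.0, 0.0, 0.0]     # the agent's learned value for each action
counts = [0, 0, 0]

for trial in range(2000):
    if random.random() < 0.1:  # explore: try something random
        action = random.randrange(3)
    else:                      # exploit: use the best estimate so far
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running average

print("learned best action:", max(range(3), key=lambda a: estimates[a]))
```

Over a few thousand trials the running averages converge toward the true payoff rates, and the agent settles on the best action. No one told it the answer; it taught itself by trying.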
There’s enormous potential for machine-learning algorithms to revolutionize a number of problems that have eluded the human mind. The DeepMind team, for example, wants to turn its attention next from game playing to helping sort out issues in the UK’s National Health Service. There’s potential for a great deal of progress on some very difficult issues.
On the more pragmatic side, we’re already seeing advertisements that change their content based on who’s viewing them and hotels staffed almost entirely by robots. As these systems gain the ability to recognize individuals and even to recognize emotional responses based on facial and body language, computers and robots will begin to seem more and more human. As Dr. Bob Weise is fond of saying, “The gap between science fiction and science fact continues to narrow.”
So what should the church be doing in the meantime? First, get comfortable with change. The next fifteen to twenty years are going to see revolutions in technology, social systems, and medicine that will surpass the advances of the last 50–60 years in the blink of an eye. With those advances, though, are going to come some tough questions, ones for which we need to begin laying the theological foundation today. These are questions already being asked by philosophers and artists, and the church needs to get ahead of the conversation now, while there’s still time to consider them.
Second, start teaching now. Because the questions are coming (and we know it), we can prepare ourselves and our congregations to give sound answers to the harder ones. We’ll look at a number of them over the next several months, but for now, let’s start with the question AlphaGo brings us face-to-face with: What does it mean to be human?
Theologically, this one isn’t hard for us. Being human is about, simply put, being human. We are the ones God created with His own hands, the ones He placed a little lower than the angels, the form He chose to take on to suffer, die, and rise, and the ones for whom He did it. We are the crowning jewel of all creation, and the cause of its fall into sin. We are human, and there is nothing in the universe that can match us.
But if an AI can simulate human emotion, be programmed with a personality, and appear in every way to be a human being, does that mean it’s alive? If it can think faster and more efficiently than a human, does that make it human? If something does the things a living thing does, is it therefore alive? Game developer Quantic Dream is already asking that question in its forthcoming title Detroit: Become Human.
These questions are already part of our culture. Now is the time to begin clearly preaching and teaching what the Scriptures and the Confessions say about humanity so that when these questions move from the movie or video game screen to the real world, we’ve already thought through the answers.