A MISCELLANY
The week of December 6, 2015

The master algorithm is going to change life as we know it

By Jesse Hicks

It’s safe to say that most of us probably don’t spend a lot of time thinking about algorithms. We go to Amazon and notice we have new recommendations based on our previous purchases; we see that Netflix has carefully selected movies based on our past preferences. But we don’t necessarily think about what that means, or what’s going on behind the scenes. We probably think most about algorithms when they go awry, whether that’s yet another Facebook faux pas, a stomach-dropping flash crash on Wall Street, or an online shop selling tasteless T-shirts generated entirely by computers.

It’s in those moments that we’re reminded just how much of the world runs on algorithms: the sets of rules, increasingly byzantine and incomprehensible to humans, that govern all the computers around us. We’re reminded just how vulnerable we are when algorithms go bad (obligatory Skynet reference here); we’re reminded that their mistakes are not those humans make because, of course, algorithms are not human.

Pedro Domingos has spent a lot of time thinking about algorithms. His new book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, is an introduction to that world—and a report on the state of the art. He believes that we live in an age of algorithms, and foresees a time when they may radically remake our world—even more than they already have.

Via email, we discussed the difference between human and computer “thinking” and learning, how much of our lives are already influenced by algorithms, and what happens when the machines finally learn how to learn everything.

You say we live in the age of algorithms. How so, and why might we be unaware of just how many algorithms are working for—and possibly against—us every day? How is machine learning working behind the scenes in ways we don’t necessarily realize?

Everything computers do, they do using algorithms. Your cellphone, your laptop, your car, your house, and your appliances are all full of algorithms. But they’re hidden: You see the shiny gadget, but not what’s going on inside. Siri uses an algorithm to understand what you say, Yelp uses an algorithm to select restaurants for you, your car’s GPS system uses an algorithm to find the best route there, and the credit card reader uses an algorithm to take your payment. Companies use algorithms to select job applicants, mutual funds use them to trade stocks, and the NSA uses them to flag suspicious phone calls.

The difference between “regular” algorithms and learning algorithms is that the former have to be manually programmed by software engineers, explaining step by step what the computer needs to do, while the latter figure it out on their own by looking at data: Here’s the input, here’s the desired output, how do I turn one into the other? And what’s remarkable is that the same learning algorithm can learn to do an unlimited number of different things—from playing chess to medical diagnosis—just by being given the appropriate data.
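
To make that distinction concrete, here is a minimal sketch (ours, not from the book; the temperature-conversion task is just an illustration). The first function is a “regular” algorithm whose rule a programmer wrote down; the second recovers the same rule purely from input/output examples:

```python
# "Regular" algorithm: a programmer writes the rule down explicitly.
def fahrenheit_programmed(celsius):
    return celsius * 9 / 5 + 32

# Learning algorithm: given (input, output) examples, fit the rule itself.
# Here, ordinary least squares recovers the linear rule from data alone.
def fit_linear(examples):
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in examples)
             / sum((x - mean_x) ** 2 for x, _ in examples))
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

data = [(0.0, 32.0), (10.0, 50.0), (25.0, 77.0), (100.0, 212.0)]
fahrenheit_learned = fit_linear(data)
print(fahrenheit_programmed(37), fahrenheit_learned(37))  # both ~98.6
```

Nobody told the second program the 9/5-and-32 rule; it was handed the desired outputs and worked the rule out—which is the whole trick, scaled up, behind chess programs and medical diagnosis alike.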

What is the “master algorithm” of the title, and how is it different from, say, Ray Kurzweil’s singularity? What are some of the potential advances that the master algorithm could bring?

The master algorithm is an algorithm that is capable of learning anything from data. Give it data about the planets’ motions, inclined planes, and pendulums, and it discovers Newton’s laws. Give it DNA crystallography data and it discovers the double helix. From all the data in your smartphone, it learns to predict what you’re going to do next and how to help you. Perhaps it can even discover a cure for cancer by learning from a massive database of cancer patient records.
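
As a toy version of that claim (our sketch, not Domingos’s; the planetary values are approximate), a simple learner can rediscover Kepler’s third law—the regularity that Newton’s laws explain—from a table of orbits:

```python
# Hypothesize a power law T = k * a^p relating a planet's orbital period T
# to its distance a from the sun, and fit the exponent p by least squares
# in log space. Semi-major axis in AU, period in years (approximate values).
import math

planets = [
    ("Mercury", 0.387, 0.241),
    ("Venus",   0.723, 0.615),
    ("Earth",   1.000, 1.000),
    ("Mars",    1.524, 1.881),
    ("Jupiter", 5.203, 11.862),
    ("Saturn",  9.537, 29.447),
]

xs = [math.log(a) for _, a, _ in planets]
ys = [math.log(t) for _, _, t in planets]
mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
p = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
print(f"fitted exponent: {p:.3f}")  # ~1.5, i.e. T^2 is proportional to a^3
```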

Other advances it could bring are: home robots; replacing the World Wide Web [with] a World Wide Brain that can answer your questions instead of just showing you Web pages; and a 360-degree recommendation system that knows you as well as your best friend and recommends not just books and movies but dates, jobs, houses, vacations—everything in your life.

Kurzweil’s singularity is the point at which artificial intelligence exceeds the human variety on Earth, and therefore becomes incomprehensible to us. Or, more precisely, that’s the “event horizon” of the singularity, just as the event horizon of a black hole is the point beyond which even light cannot escape. Without the master algorithm, we won’t reach the singularity anytime soon. With it, AI will certainly accelerate, but I think we will still understand lots about the world, because the AIs will be working to serve us, by design. We may not understand how they produced what they give us, but we will understand what those products do for us, or we wouldn’t want them. Besides, the world has always been partly beyond our understanding. The difference is that now it’s partly designed by us, which is surely an improvement.

You describe the field as currently divided into “tribes,” each with a different approach to machine learning that can solve certain kinds of problems better than others, but with no tribe possessing the algorithm that can subsume all others—basically a machine-learning process that would let us answer all answerable questions. You compare that assumed master algorithm to the Standard Model of particle physics, or the central dogma of molecular biology: “a unified theory that makes sense of everything we know to date, and lays the foundation for decades or centuries of future progress.” That sounds like a big claim. What makes a master algorithm seem plausible, and what’s keeping our disconnected “tribes” from creating it?

Even some of the simplest learning algorithms have a mathematical proof that they can learn anything given enough data. So in that sense there’s no doubt that the master algorithm exists, and indeed some researchers in each of the tribes believe that they’ve already found it. But the catch is that the algorithm has to be able to learn what you want it to using realistic amounts of data and computation. Here we turn to the empirical evidence: Nature provides us with at least two instances of an algorithm that can learn anything (or almost), namely evolution and the brain. So we know that the master algorithm exists; the question is whether we can figure out what it is precisely and completely enough to write it down, in the same way that physicists write down the laws of physics as equations (which are themselves a kind of algorithm).
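
For the curious, one standard form of such a guarantee—a textbook PAC-learning bound, not taken from the book or the interview—spells out what “enough data” means:

```latex
% Realizable-case PAC bound for a finite hypothesis class H: with
% probability at least 1 - \delta, any hypothesis consistent with
% m i.i.d. training examples has true error at most \epsilon, provided
\[
  m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right).
\]
```

The catch Domingos points to lives in that inequality: for rich hypothesis classes, the required data and computation can be impractically large.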

Unfortunately, the five tribes of machine learning are like the blind men and the elephant: One feels the trunk and thinks it’s a snake, another leans against the leg and thinks it’s a tree, yet another touches the tusk and thinks it’s a bull. What we really need is to take a step back and see the whole picture and how the pieces fit together. Ironically, this might be easier for someone who’s not already in the field and thinking along the previously laid tracks of the five tribes.

One of the epigraphs opening your book comes from Alfred North Whitehead: “Civilization advances by extending the number of important operations we can perform without thinking about them.” Whether one agrees with this sweeping claim, it seems to evoke something important about our notion of “thinking” and its ties to civilization and humanness. We tend to think of “thinking” as a uniquely human activity, perhaps even a defining one. Someone like Nicholas Carr, for example, cautions against outsourcing our thinking, which in his mind diminishes our humanity—the worry being that a lack of thinking makes us somehow more robotic, in a very broad sense. At the same time, we worry about “thinking” machines: You address Skynet and other apocalypse-bringing artificial intelligences, which persist as fictional bogeymen ready to wipe us out if they become too powerful. Should we consider computers already capable of “thinking”? Or is that a uniquely human activity—and if so, how will that line between a human thinker and a machine learner be drawn in the future?

Edsger Dijkstra, a famous computer scientist, said that the question of whether a computer can think makes about as much sense as the question of whether a submarine can swim. Definitions aside, the important point is that computers can solve problems that humans solve by thinking—and the range of those problems keeps expanding. With machine learning, computers can even solve problems that we don’t know how to program them to solve—they figure it out on their own. So the dividing line is very fuzzy, and it keeps shifting.

I disagree with Nicholas Carr that outsourcing some of our thinking diminishes us—on the contrary, it augments us, because it allows us to focus our thinking on better things. That, I think, is Whitehead’s point. Socrates didn’t like writing, because it allowed people to forget things. Luckily for him, Plato wrote down his thoughts for him, and that’s why humanity remembers them to this day. Writing augments our memory, and Google augments it even more. Far from making us stupider, it makes us smarter.

Toward the end of the book you write, “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” Can you say a little bit more about what you mean by that?

Prominent people like Stephen Hawking and Elon Musk have raised the alarm about artificial intelligence, calling it an existential threat to humanity. But the “Skynet scenario” of an evil AI taking over the world is pretty far-fetched; I don’t know too many AI experts who take it seriously. The problem is that people confuse being intelligent with being human. In Hollywood movies the AIs and robots are always humans in disguise, but in reality they’re quite different. Computers don’t have a will of their own, emotions, or consciousness. They’re just extensions of us. As long as all they do is solve the problems that we set [for] them and we set the bounds and check the solutions, computers can be infinitely intelligent while being of no danger to us.

That doesn’t mean there’s nothing to worry about. Like any other technology, AI could be used by humans for evil purposes. But most of all, AIs could cause harm by giving us what we literally asked for instead of what we really wanted: the sorcerer’s apprentice problem. Computers already make all sorts of important decisions in the world today—who gets a job, who gets credit, who is flagged as a potential terrorist. And they often make mistakes, because they have no common sense. But the cure for that is to make them smarter, not stupider. So it’s not having too much AI we should worry about, but having too little.

What are the most important things for us to pay attention to as machine learning progresses?

We—everyone—need to take control of the machine learning algorithms that surround us. Otherwise they’ll serve the organizations that built them, not us. It’s like driving a car: You need to know what the steering wheel and pedals are, and what to do with them. If a cab driver said to you “I think I know where you want to go; I’ll just take you there,” you’d probably get out in a hurry. But that’s what happens with learning algorithms today. They have control knobs, but they’re hidden. You should be able to, for example, tell Amazon’s recommender system what you want it to do for you, ask it to justify its choices, explain what it did wrong, etc. The more widespread machine learning becomes, the more important this is.

Not to put you in the unenviable position of making falsifiable predictions, but: What kind of advances do we need to make to create the master algorithm?

Many people think that we already have the main ideas we need to create the master algorithm; it’s just a matter of figuring out how to combine them. And we have indeed made much progress in this direction; we’re not far from succeeding, in fact. But my feeling is that we are still missing some major ideas, and someone needs to come up with them. I have some candidates that I’m working on, but I’m just one person. So one of my goals in writing the book was to open it up to everyone else. My secret hope is that a kid somewhere—the Newton of AI—will read the book, start thinking about machine learning, and have the lightbulb moment from which all else will follow.

Illustration by Max Fleishman