Rebooting AI: Building Artificial Intelligence We Can Trust
By Gary Marcus and Ernest Davis.
Pantheon Books, 2019.
Hardcover, 288 pages, $29.

Reviewed by Nicholas Meverel

Some years ago, without fanfare, the phrase “artificial intelligence” began to refer no longer to something that our descendants might possibly enjoy or put up with but instead to something already in our midst. This shift in speech suggested a sea change in technology—if real, one of the greatest ever accomplished, the fabrication of minds by human hands.

The appearance of “artificial intelligence” should have been as pivotal as the industrial revolution, so why did it feel like an anticlimax? Some smelled a rat, but they lacked the technical knowledge to second-guess the press-release pizzazz. Now come Gary Marcus, machine-learning entrepreneur, and Ernest Davis, computer science professor, with a clear and brilliantly argued book, Rebooting AI: Building Artificial Intelligence We Can Trust, showing that the algorithms passed off today as “intelligent” are nothing of the kind.

Consider this passage from Farmer Boy by Laura Ingalls Wilder. The young Almanzo finds a pocketbook (wallet) on the street and approaches a Mr. Thompson, thinking it might belong to him.

Almanzo turned to Mr. Thompson and asked, “Did you lose a pocketbook?”

Mr. Thompson jumped. He slapped a hand to his pocket, and fairly shouted. “Yes, I have!”

As Marcus and Davis note, a decent reading algorithm should be able to answer the questions, “Why did Mr. Thompson slap his pocket with his hand?” and “Before Almanzo spoke, did Mr. Thompson realize that he had lost his wallet?” But these questions, easy for many children, are beyond the grasp of algorithms today—and not only today, but tomorrow, and tomorrow, and tomorrow. The reason for this is not that the algorithms have a poor understanding of the passage; it’s that they have no understanding at all of any words.

Rather than understand words, algorithms sift massive amounts of data to find statistical correlations between one arrangement of letters and another. Google Translate, for example, relies on “bitexts,” such as the proceedings of the Canadian parliament printed in both English and French; it and similar algorithms are parasitic on mountains of verbal data generated by human beings, without which they would be helpless. In no way does Google Translate understand the letter-arrangements it takes in or spits out. When we say that a machine “knows” or “wants” something, this is a convenience of expression, but what an expensive convenience it has turned out to be, purchased at the cost of widespread misunderstanding of what machines can do.
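To make the point concrete, here is a toy sketch of my own devising, not anything from the book: tally how often word pairs co-occur across aligned sentence pairs, then call the most frequent pairing a “translation.” The three-sentence bitext is invented, and real systems use millions of sentences and far cleverer alignment models, but the principle of meaning-free tallying is the same.

    from collections import Counter

    # Invented toy bitext; real systems ingest millions of aligned sentences.
    bitext = [
        ("the house is red", "la maison est rouge"),
        ("the house is big", "la maison est grande"),
        ("the car is red",   "la voiture est rouge"),
    ]

    cooccur = Counter()
    for english, french in bitext:
        for e in english.split():
            for f in french.split():
                cooccur[(e, f)] += 1   # pure tallying; no meaning anywhere

    def translate(word):
        # Return the French word most often seen alongside the English one.
        # (Ties broken alphabetically here; real systems use alignment models.)
        pairs = [(n, f) for (e, f), n in cooccur.items() if e == word]
        return max(pairs)[1] if pairs else None

    print(translate("house"))   # "maison" -- correlation, not comprehension

Everything here is arithmetic over strings; at no point does anything in the program answer to the word “house.”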

The authors note that in understanding the Farmer Boy passage, human beings rely not on statistical correlations but on certain implicit assumptions: “People can drop things without realizing it.” “People often carry their wallets in their pockets.” “You can often find out whether something is inside your pocket by feeling the outside of the pocket.” These are only three assumptions, and even these three in turn rely on potentially thousands of subsidiary assumptions, such as the relations between objects in space, the desirability of money, the limits of perception, and so on. The way seems clear then: program algorithms to know these many millions of ideas.
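For the flavor of what that would mean, imagine (this sketch is mine, not a proposal from the book) commonsense written down as explicit facts, in the spirit of projects like Cyc:

    # Commonsense written down as explicit facts; all entries invented here.
    facts = {
        "people can drop things without realizing it",
        "people often carry their wallets in their pockets",
        "feeling the outside of a pocket reveals what is inside",
    }

    def can_answer(needed):
        # The system "knows" exactly what someone remembered to type in.
        return all(fact in facts for fact in needed)

    # The pocket-slapping question happens to need only facts we wrote down:
    print(can_answer({"people often carry their wallets in their pockets",
                      "feeling the outside of a pocket reveals what is inside"}))  # True

    # But each fact leans on unstated others, without discernible end:
    print(can_answer({"money is worth keeping"}))   # False -- nobody typed it in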

But this project will founder on the fact that algorithms can’t understand general statements either. Algorithms deal in the numerical, fundamentally in relations of identity or non-identity. But vagueness, generality, and metaphor are ingredients of human thought—not only in poetry but in all practical endeavors. Much of our thought turns on acts of naming that intelligent people can disagree about. Something happens to me in my life: is it proper to call it a “revolution,” a “peevish remark,” a “birthday party”? If I buy you a martini on your birthday, is that a “birthday party”? If we disagree about whether I named something correctly, I can’t force your assent in the way that I can force you to accept a mathematical theorem.

The authors ask as an example whether the French resistance or the Soviet invasion of Finland was part of World War II. The war has no definite bounds like, say, a barrel, but no one denies its reality. Trained by mechanism, contemporary thought has a hard time understanding that a being can be both real and vague around the edges. But the Atlantic Ocean is real, although vague in brackish inlets. Reality is not a collection of perfectly bounded things that we can exhaust through quantitative measurements. As a result, much of the daily work in data science consists in using human judgment to affix labels to things, a task no machine can do.

And words are only rarely like rigid diagrams of definite physical states. What, for example, does “peace” refer to in the physical world? We could say “lack of turbulence,” but windy grass can seem peaceful, more so than still buildings. What would our machines measure to detect “peace”? Or even “your day,” as in “How’s your day going?” “It’s going great.” How would one program “great,” which is to say, make the word quantitative? How could you even begin? (A list, potentially infinite, of heartening events? But people react differently to the same event.) “It’s going great” isn’t exactly Gerard Manley Hopkins; this is sane, daily speech, but algorithms can’t even touch it. The world of meanings hovers over the physical world, illuminating it without being explained by it. And the simple fact that intelligent people disagree on important acts of naming suggests that no one can program intelligence, since programming as such aims at generating a consistent output from a given input.

Fine—algorithms may not be able to converse with us, but let’s be honest: what we really want is just a robot maid. The lure of artificial intelligence seems to be that machines will do the boring thinking while we human beings address the interesting questions. But there can be no sharp division between these two realms. Even drudgery needs thought that is basically linguistic and not quantitative.

Running a brush back and forth across a tile seems programmable enough. But what about the task of “picking up a mess”? What’s “a mess”? The question can’t be settled by quantitative measurement. Understanding a “mess” presupposes an implicit understanding of a “well-ordered room,” which in turn presupposes an implicit understanding of all the purposes to which a person might put the room—in short, we end up needing the robot maid to have an implicit and general understanding of all of human life, which was a bit more than we bargained for. “Mindless work” needs much more mind than we are tempted to think.
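One can even write the doomed program. Here is a deliberately naive sketch, my own invention, of a quantitative “mess” detector; notice that the threshold and the inventory of “proper places” smuggle the human judgment in through the back door:

    def is_mess(room, proper_place, threshold=3):
        # Call the room a mess if "too many" objects are out of place.
        out_of_place = sum(1 for obj, place in room.items()
                           if proper_place.get(obj) != place)
        return out_of_place >= threshold

    room         = {"book": "floor", "mug": "desk",    "blanket": "sofa",   "blocks": "floor"}
    proper_place = {"book": "shelf", "mug": "kitchen", "blanket": "closet", "blocks": "bin"}

    print(is_mess(room, proper_place))   # True -- yet the same numbers would
    # condemn a playroom mid-game or a painter's studio mid-canvas

The rule counts perfectly well; what it cannot do is know what the room is for.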

“Generalized self-driving [cars] is a hard problem,” tweeted Elon Musk recently, “as it requires solving a large part of real-world AI. Didn’t expect it to be so hard, but the difficulty is obvious in retrospect.” You don’t say. Of necessity, Marcus and Davis note, robotics firms today focus on discrete tasks like picking apples. Any intelligent creature, they say, “needs to compute five basic things: where it is, what is happening in the world around it, what it should do right now, how it should implement its plan, and what it should plan to do over the longer term in order to achieve the goals it has been given.” All five of these rely on background contexts that spread without discernible end. Any rigid formulation of these criteria would fall short of reality: real human thought is flexible and porous.
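Set down as a program skeleton (my paraphrase, not the authors’ code), the five computations look deceptively tidy; every function name below is a placeholder for an open research problem:

    # The five computations, stubbed; each body is an open research problem.
    def localize(sensors):            ...   # where am I?
    def perceive(sensors):            ...   # what is happening around me?
    def decide(state, world, plan):   ...   # what should I do right now?
    def act(action):                  ...   # how do I carry it out?
    def plan_ahead(goals, world):     ...   # what should I plan to do long-term?

    def control_loop(sensors, goals):
        world = perceive(sensors)
        state = localize(sensors)
        plan  = plan_ahead(goals, world)
        act(decide(state, world, plan))
        # The loop is trivial; the unbounded background context
        # that each step quietly depends on is not.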

This review has only skimmed the wealth of this slim and lucid book: the critiques of deep learning and robotics are not to be missed. So far, this review may have given the impression that Marcus and Davis aim to show the impossibility of artificial intelligence. In fact, they hope for its realization, and they criticize the state of the art as concerned friends. But their hopes seem to rest on certain doubtful assumptions, above all on the notion that we perceive the world by building a model of it, a model made out of explicit facts. To understand the poverty of this idea, the book to read is What Computers Still Can’t Do by Hubert Dreyfus (1929–2017), who used the thought of Martin Heidegger (1889–1976) and Maurice Merleau-Ponty (1908–1961) to explain the failures of AI in his day. It remains unrefuted.

The artificial intelligence project stands or falls on the notion that thought itself consists only of explicit rules applied to quantifiable data. If it does not, then we can never engineer intelligence, since what we mean by engineering is precisely this putting together of rigid parts into a rigid structure. The danger, then, is not thinking machines but human beings thinking like machines. That danger grows every year, and no one has any idea how to stop it. In the meantime, we can offer some resistance by refusing to call algorithms “artificial intelligence.” As Confucius taught us, the first task in repairing society is to call things by their right names. 


Nicholas Meverel is a writer living in Washington, DC.
