Leading thinker on artificial intelligence and Pulitzer Prize-winning author John Markoff ’71 speaks with Associate Professor of Computer Science Janet Davis about the pursuit of perfect AI, and whether robots will soon be assembling our Ikea furniture.

Janet Davis: Your new book is Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. Where did that title come from?

John Markoff ’71: The title starts at Whitman. My friend David Current ’71 introduced me to Richard Brautigan. Our generation read Brautigan: “A Confederate General from Big Sur,” “Trout Fishing in America.” Later, in the ’80s, when the personal computer industry grew up, I discovered that Brautigan had also written a poem called “All Watched Over by Machines of Loving Grace,” and I always loved it, because it has a note of irony. I took part of the poem and used it for the flyleaf of the book. I had another, more esoteric title in mind, and my editor, to her incredible credit, said, “That’s obviously your title.” As machines increasingly wrap around us and take human form, it’s something to think about. What’s the relationship going to be like? I was really influenced by a conversation I had with [computer scientist and UCLA professor] Alan Kay. I was talking with him about the issue of humans interacting with machines. Will they be slaves to us, will they be masters, or will they be partners? And his point was that it’s a design question.

JD: Machines of Loving Grace really evokes an image of computers or robots caring for human beings. Early in your book, you write a little bit about the coming labor crisis in elder care as the world population ages, and I was wondering if you could talk more about the role robots might play.

JM: I’ve really changed my view on this, and changed it even more since I finished my book. I began writing about a new wave of AI technologies that were having an impact for the first time on the upper reaches of skilled white-collar workers. I was noticing that $35-per-hour paralegals and $400-per-hour attorneys were being displaced by programs that could do a demonstrably better job of reading documents than humans. It was eye-opening. And then you start to look at the medical profession, and you can see all kinds of interesting things happening—from IBM’s supercomputer-turned-medical-diagnostic tool Watson to radiology—where pattern recognition programs are doing a better and better job. Moshe Vardi, a computer scientist at Rice University, is one of the technologists who’s articulated the possibility that machines will be able to do everything that humans are able to do by 2045. He hedges on whether that means complete unemployment for the entire population, but he raises it as a coming crisis. I started in that camp, and then there were a couple of things that moved me in a really different direction.

JD: What were the things that changed your mind?

JM: One was a conversation I had with Danny Kahneman, the psychologist and Nobel laureate in economics. He pointed out that, because of the one-child policy, there is going to be a period when the Chinese workforce contracts, and, in fact, there may not be enough workers to do the jobs, and they’ll perhaps rely on robotics. All over the advanced world, populations are aging. In Europe, the population is aging much faster than in the United States. They’ve got a billion-euro project to develop an elder care robot, which is an interesting social commitment. By 2020, for the first time in human history, there will be more people in the world over 65 than under five. I’ve begun to think that’s much more important than people realize.

JD: In terms of cognition versus perception, what are the limits of automation over the next two decades?

JM: Machines are starting to see, machines are starting to listen, and that’s going to transform the workforce. But where you have a diversity of unstructured tasks, I think the machines are going to be much slower in coming to take over those tasks.

JD: And that comes back to that dichotomy between perception and cognition. Where we’ve been making great strides in perceptual tasks, like understanding images, that’s a long way from being able to formulate a plan or use common-sense reasoning.

“As machines increasingly wrap around us and take human form, it’s something to think about. Will they be slaves to us, will they be masters, or will they be partners?”

JM: Yes. A ground truth for me is the DARPA Robotics Challenge finals recently completed in Pomona, California. Twenty-four groups of researchers, who are among the best roboticists in the world, were given almost two years, with millions of dollars, to build a machine to do eight simple tasks, and most of them failed. The machines could walk, they could stabilize themselves in uneven environments, but there was very little high-level autonomy. We’re so far away from that. To your point, put a robot in the living room and ask it to clean up. When is that going to happen?

JD: You said earlier there’s no way you’d let one of these robots near your grandmother.

JM: Yeah, that’s the test to me. There’s this effort afoot right now in the field to get beyond the Turing test (Alan Turing’s test in which, to pass, a machine’s behavior must be indistinguishable from a human’s). There are a bunch of competing ideas about how you measure intelligence. [New York University psychology professor] Gary Marcus is proposing what he calls the Ikea test, and that is, you give the computer a bag of parts, and it has to assemble the furniture.

JD: And that requires not only perception, but a tremendous amount of common-sense knowledge. And, really, embodied reasoning, because Ikea diagrams depend a lot on your ability to understand the human body and how it relates to things in the world.

JM: So that’s a great test, right? And it’s a test for some humans, as well.

JD: The test of a marriage sometimes, even! You write about a computer scientist named Jerry Kaplan, and in particular you talk about how his career evolved.

JM: Jerry is a close friend, and he’s someone whom I’ve known through his entire career. He’s written this book called Humans Need Not Apply. He’s teaching a course on the history and philosophy of AI at Stanford now. [In a discussion we held recently at Kepler’s Books in Menlo Park,] one of the people in the audience said, “Here we have this new world—how should people prepare themselves?” And Jerry’s point was, well, they should start with a good liberal arts education.

JD: Well, good!

JM: The liberal arts education for this world is more necessary than ever. We’re in this world of a gig economy. I’ve been in a single profession my entire life. Increasingly people are going to move from …

JD: Gig to gig to gig …

JM: Yeah. In talking about Jerry and his background, I was reflecting his perspective as much as my own. He basically got out of school with a liberal arts degree and began to look around for what he wanted to do in life. He’d seen 2001: A Space Odyssey and he said, “I want to build H.A.L.” And that’s what led him to decide to go on to study natural language AI, and that’s what he got his Ph.D. in.

JD: It sounds like you’re arguing that the humanities really have a role in envisioning the future.

JM: Yes, very much so. Having a liberal arts education helped me, too. I had a social science background, and I was always asking social science questions. “How is this technology going to affect society?” was the reason I went in that direction, and I made a whole career out of it.

JD: That’s interesting, because the question “How is technology going to affect society?” is the question that motivates me too, though I come at it from the perspective of design.

JM: That’s right. Basically teaching people how to design the future. I mean, that’s what you’re doing in a way, right?

JD: I hope so.

JM: I’m jealous. Late in the book I write about a talk that Ron Arkin gave at Humanoids 2013. He’s an ethicist and roboticist at Georgia Tech. He gave a talk to 200 of the best humanoid roboticists in the world, and the title of his talk was “How to NOT Build a Terminator.” His message was that you have to think about this as designers. You’re not operating in a vacuum; this should be part of the design paradigm.

JD: A social determinist would say, “Well, it doesn’t matter. You build the technology the way you build it, and bad people will use it to bad ends.” But as designers, you really do have an influence over how that technology will be used.

JM: Too often in Silicon Valley there’s this sort of crank-turning mentality, that we’re just cogs in a wheel, and I really don’t think this is acceptable as a design philosophy at this point.

JD: It’s not acceptable to think that the future is inevitable.

JM: Exactly.

—David Brauhn

John Markoff '71, a Whitman sociology major, is now a reporter for The New York Times and the author of several books about technology, hacking and computing. He was one of a team of reporters who won the 2013 Pulitzer Prize for Explanatory Reporting for a series of 10 articles on tech industry business practices.