Q&A w/ Prof. Paul Resnick's class at University of Michigan
Author of The Most Human Human, WSJ bestseller
Glad to be answering questions from your class with Professor Paul Resnick. I hope you have been enjoying The Most Human Human and that it has been thought-provoking for your own ideas, questions, and discussions. I'm eager to hear what's come up and what's been on your mind. Looking forward to it.
This Q&A took place between 10/17/15 and 10/23/15. Unanswered questions have been hidden.
5 questions
What do you think is the one characteristic that will always be able to separate AI and humans?
Author of The Most Human Human, WSJ bestseller
First, it's hard to use words like "always" in this context.

I remember that when the excerpt of The Most Human Human was printed in the Atlantic, the cover said "Why Machines Will Never Beat the Human Mind," and I cringed, because that is a different claim from the much more nuanced one I actually make in the book and article. The BBC once grilled me on live television about why I thought machines would _never_ beat the human mind (emphasis on "never"), and it was hard to explain that this claim had actually been made by the graphic designer who did the Atlantic cover, not by me!

But back to your question, which is a very good one. In philosophy, there's an idea called "dualism" that comes most famously from Descartes; in effect it means that there are two wholly separate types of things in the world. Matter is one, and the "mind" or the "soul" is something entirely different. If you're a dualist, it's possible to believe that machines will literally never bridge the gap to human-level intelligence because they're "only" material.

I'm not a dualist: I think that the unbelievable physical complexity of the brain and body is the "whole story" when it comes to explaining how intelligence arises in human beings.

There's an intriguing book from 1989 by Roger Penrose called "The Emperor's New Mind," in which he argues that the brain is subject to quantum mechanics in a way that purely algorithmic systems (Turing machines, etc.) are not -- and from this he concludes that there is an unbridgeable chasm between brains and computers. I find this line of thought fascinating, but ultimately I'm incredulous either that quantum effects are so central to the human experience, or that they can't be incorporated into some form of computing. So again, I'm left concluding that there really isn't some fundamental barrier that prevents AI from coming to be.

But that, too, is a different question than what you're asking. You're asking what will always separate AI and humans, and for me it's very interesting to consider this question from the perspective of some future world in which legitimate, inarguable, human-level (or beyond) AI exists.

I think there is no denying that this intelligence would be of a very different type, or "flavor," than human intelligence -- just as human intelligence and dolphin intelligence, or human intelligence and octopus intelligence, are today. In fact the gap would probably be much wider than that.

For one thing, human intelligence operates overwhelmingly at very specific spatial and temporal scales. When we try to comprehend the age of the universe, for instance, we use metaphors like "If the universe were one year old, the human race would appear on December 31st" -- making analogies to the scales that are relevant to our bodies. Watch a time-lapse of a starfish or a plant and you have a completely different understanding of it. The same is true at an even bigger scale with geological phenomena.

Even more significant, in my view, than the impact of the human body, the human sensory system, and the human lifespan, is the simple fact that humans have a specific life history—we're _individuals_. When you and I see a movie and I ask you what you think, when I could just as easily read an eloquent essay by a film scholar, what's going on is that I'm not actually trying to learn about the movie. I'm trying to learn about _you_.

An AI that emerges as some kind of mesh of networked devices will present itself _very_ differently than one that emerges as disparate individual minds accumulating idiosyncratic life histories. My guess is the first will come much sooner than the second, and be much more "alien" as a result. Who knows, maybe in time we'll have both. Maybe the human race will become increasingly networked, to the point that spouses literally share sensory organs, for instance, seeing what each other see and hearing what each other hear. In that case, we end up meeting the "alien" AI mesh halfway, and the human experience is nothing like what it used to be.
Professor at University of Michigan School of Information
Do people assess others' morality or judgment on their high-entropy positions or on their high-surprisal positions?

On p. 253 of your book you muse about diffs and morality. In class, we discussed this. Should we assess Hillary Clinton's judgment with hindsight on her votes as a Senator on 95-5 decisions or on 52-48 decisions? Perhaps taking a high-surprisal position (voting with the 5) should get the highest weight in our assessment, followed by high-entropy positions (52-48 votes), with the least weight going to her positions on 95-5 decisions. Politicians seem to suggest this at times, but I'm not sure how much the public buys it. What do you think?
Author of The Most Human Human, WSJ bestseller
Very interesting question, and not one I've thought about before in that way.

My first thought was of the famous "trolley" thought experiments, where all sorts of strange moral conclusions emerge: for instance, asymmetries in how we judge someone for the consequences of taking an action versus the consequences of _not_ taking an action, that kind of thing. I remember a discussion of Dick Cheney's line, "Ultimately, I am the guy who pulled the trigger that fired the round that hit Harry." Do we let him off the hook more the more intervening stages of cause-and-effect he can wedge in between himself and the person he shot? ("Ultimately, I'm the guy who pulled the trigger that released the hammer that impacted the primer that ignited the propellant that expelled the bullet that...")

There is, of course, the bigger distinction in ethics between judging people based on their intentions and based on the consequences of their actions. This is certainly relevant at the ballot box: I would consider voting for someone for mayor who has sensible municipal-level views but a position about State or Federal power that I consider nuts—because becoming mayor wouldn't really enable them to affect that.

Re. Clinton, perhaps ultimately we're asking two different questions. One question is about her character, and the other is about what would happen if she were elected. For the latter, we look at the close votes and places where she tipped the scales: entropy. For the former, we look at the places where she broke from the herd: surprisal.
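
To put rough numbers on that distinction, here is a minimal sketch in Python (my own illustration, using the standard information-theoretic definitions rather than anything from the book): the entropy of a vote split measures how contested the decision was, while the surprisal of a particular position measures how unusual it was to take it.

```python
import math

def entropy_bits(p_yes: float) -> float:
    """Shannon entropy (in bits) of a two-outcome vote split."""
    return -sum(p * math.log2(p) for p in (p_yes, 1.0 - p_yes) if p > 0)

def surprisal_bits(p_position: float) -> float:
    """Surprisal (in bits) of taking a position held by that fraction of voters."""
    return -math.log2(p_position)

# A 95-5 decision: the split itself carries little information (low entropy),
# but siding with the 5 is a high-surprisal position.
print(round(entropy_bits(0.95), 2))    # 0.29 bits
print(round(surprisal_bits(0.05), 2))  # 4.32 bits

# A 52-48 decision: nearly maximal entropy, but neither side is very surprising.
print(round(entropy_bits(0.52), 2))    # 1.0 bits
print(round(surprisal_bits(0.48), 2))  # 1.06 bits
```

On this toy accounting, the 52-48 votes are where an individual vote most affects the outcome, while voting with the 5 is where a vote most reveals the individual.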

This leads into a deeper question: whether being moral and living by a strict moral code are, in fact, antithetical. Deciding at some point in one's life to live by an elaborate moral code that determines all of one's future decisions is, in a certain way of thinking, a turning off of one's moral system. There's a further point here that leads back to AI: most computer systems are designed to evaluate a set of options with respect to a predetermined set of criteria (an "objective function"). Most high-level human reasoning (and moral reasoning is certainly in this category, in my view) exists at a meta level: evaluating the relative importance of one criterion versus another, and deciding whether to scrap the existing objective function and assert a new one in its place.
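
As a rough illustration of that distinction (entirely my own sketch, with made-up criteria and weights, not anything from the book): an ordinary program scores options against a fixed objective function, while the "meta" move is to revise the objective function itself.

```python
from typing import Dict

# Each option is described by scores on several criteria (all names hypothetical).
options: Dict[str, Dict[str, float]] = {
    "option_a": {"cost": 0.2, "fairness": 0.9, "speed": 0.4},
    "option_b": {"cost": 0.8, "fairness": 0.3, "speed": 0.9},
}

def choose(options: Dict[str, Dict[str, float]], weights: Dict[str, float]) -> str:
    """Ordinary decision-making: score each option against a fixed objective function."""
    def score(features: Dict[str, float]) -> float:
        return sum(weights[criterion] * value for criterion, value in features.items())
    return max(options, key=lambda name: score(options[name]))

# Fixed objective function: the criteria and their relative importance are given in advance.
weights = {"cost": 0.5, "fairness": 0.2, "speed": 0.3}
print(choose(options, weights))  # -> "option_b"

# "Meta-level" reasoning: re-evaluating the objective function itself,
# e.g. deciding that fairness should now dominate the other criteria.
revised_weights = {"cost": 0.1, "fairness": 0.8, "speed": 0.1}
print(choose(options, revised_weights))  # -> "option_a"
```

The first call never questions the weights; the second call represents the meta-level step of deciding that one criterion should matter more than it previously did.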
Does the future of AI excite or worry you?
Author of The Most Human Human, WSJ bestseller
Relative to how I felt a few years ago when I wrote the book, I've probably become slightly more worried. There are definitely enormous upsides. Self-driving cars have the potential to all but erase one of the leading causes of preventable death in the world, for instance: that is world-changing. And the arrival of a peer-level (or better) intelligence alongside our own will give us an unprecedented opportunity to understand ourselves more deeply: for cognitive scientists, for psychologists, for philosophers, and beyond.

But any tool that leads to concentrations of power (and in this case also wealth) has the ability to cause huge problems. AI certainly falls into that category, as well as having other dangers all its own. Before AI gets to what's called the "existential risk" level—where it might intentionally or unintentionally exterminate the human race, let's say—there will be an earlier phase (of which we're just at the beginning) in which it will cause huge political and economic disruption. For instance, AI will likely force a total reconception of what the labor force is supposed to look like. What jobs and skills will survive? Many of the AI thinkers I know are also studying radical economic ideas in parallel, such as the idea of a guaranteed basic income. I'd feel more comfortable if I felt like we had a better grasp of what to do societally about either these short- or long-term challenges.
Do you see a future where humans can upload their consciousness into a computer?
Author of The Most Human Human, WSJ bestseller
In principle, I can imagine a future where—some orders of magnitude from now in terms of storage, processing, scanning, and so forth—it does become possible to reproduce a brain at the "connectome" level in a machine. All sorts of unbelievably strange things happen here, not the least of which is that any of these minds can effectively be copied ad infinitum. Why employ a physics department when you can have a 1:1 Einstein-to-undergraduate ratio, plus another hundred thousand Einsteins publishing research? It gets astoundingly weird. Now, what (if anything at all) it might "be like" to "be" one of those whole-brain-emulation virtual-machine minds is another question entirely.
Here is the Merriam-Webster definition of A.I.: "a branch of computer science dealing with the simulation of intelligent behavior in computers. The capability of a machine to imitate intelligent human behavior."

Our approach to A.I. seems to be overwhelmingly human-centric, and understandably so - we are the only examples of true intelligence that we know of. Do you think that such a rigid definition of intelligence is a hindrance to A.I. research? Why must a computer act (and one day, function) like a human to be considered intelligent? Is acting like a human a stepping stone to something greater?
Author of The Most Human Human, WSJ bestseller
There's something syntactically intriguing for me about their definition, which is that it specifies that AI is about the _simulation_ of intelligent behavior. At some level, what differentiates "simulating" that behavior from simply doing it? Further lexicographic irony for me is that real, genuine AI is in fact precisely the point at which this faking it/making it boundary gets crossed.

It reminds me of how, in the early days of the printing press, print was originally referred to as "A.S"—"artificialiter scribendi" in Latin—"artificial script." At some point, print was simply print: not a knockoff form of handwriting but its own thing (and now overwhelmingly the "realer" form of writing, if either can be said to be). So perhaps one day the "A" in "AI" will have to go; but that won't necessarily mean that the "print" and "script" of cognition have to be entirely the same and indistinguishable.

In many ways I think the struggle to build intelligent machines in our image has been an incredible boon to our own understanding of ourselves. One of the clearest examples is that the field set out trying to encode the way the conscious mind works, which led to the kinds of "if this, then that"-type programs that dominated computer science for many decades. This sort of "Good Old-Fashioned AI," as it's called, pretty quickly came up against some fundamental limits. One can't really _articulate_ why or how one judges a blurry photograph to be a cat rather than a dog. In fact there's an enormous class of things that happen without any logical, deductive, step-by-step reasoning process—language acquisition and use, cause-and-effect inference, almost everything involving the senses—and, as it turns out, this is where a lot of the true complexity of brains (not just human ones) resides.
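
To make "if this, then that"-type programs concrete, here is a deliberately crude sketch (my own caricature, with invented feature names, not any historical system) of a GOFAI-style classifier; hand-written rules like these are exactly what break down on perceptual judgments such as the blurry cat-versus-dog photo.

```python
# A caricature of GOFAI: explicit, hand-written rules over hand-chosen features.
# (All feature names are invented for illustration.)
def classify_animal(features: dict) -> str:
    if features.get("has_whiskers") and features.get("ear_shape") == "pointed":
        return "cat"
    if features.get("snout_length_cm", 0) > 5 and features.get("barks"):
        return "dog"
    # A blurry photo rarely yields clean feature values at all, which is
    # precisely where rule systems like this fall apart.
    return "unknown"

print(classify_animal({"has_whiskers": True, "ear_shape": "pointed"}))  # cat
print(classify_animal({"snout_length_cm": 8, "barks": True}))           # dog
print(classify_animal({"blurry": True}))                                # unknown
```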

In fact, I think one of the greatest and most praiseworthy revelations to come out of the failure of GOFAI (and the subsequent interest in neural networks and other approaches modeled more on the brain than on the mind, per se) is that many of the most complex and sophisticated things minds do are not, in fact, the things we typically thought of as "intelligent" behavior—deriving theorems, playing high-level chess, etc. They are precisely the things that four-year-olds do. In this way, the project of AI has, perhaps more than anything, vindicated the cognitive complexity of babies and animals.

So, if anything, we have a radically less rigid view of intelligence than when we started working on AI. That, to me, is a big deal.

As to the question of why a computer must act like a human to be considered intelligent—well, I think the answer has a lot to do with who it is that's doing the considering!