Isaac Ilyashov
Here is the Merriam-Webster definition of A.I.: "a branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior."

Our approach to A.I. seems to be overwhelmingly human-centric, and understandably so - we are the only examples of true intelligence that we know of. Do you think that such a rigid definition of intelligence is a hindrance to A.I. research? Why must a computer act (and one day, function) like a human to be considered intelligent? Is acting like a human a stepping stone to something greater?

There's something syntactically intriguing for me about their definition, which is that it specifies that AI is about the _simulation_ of intelligent behavior. At some level, what differentiates "simulating" that behavior from simply doing it? A further lexicographic irony, for me, is that real, genuine AI would be precisely the point at which this faking-it/making-it boundary gets crossed.

It reminds me of the fact that in the early days of the printing press, print was originally referred to as "A.S."—"artificialiter scribendi" in Latin—"artificial script." At some point, print was simply print: not a knockoff form of handwriting but its own thing (and now overwhelmingly the "realer" form of writing, if either can be said to be). So perhaps one day the "A" in "AI" will have to go; but that won't necessarily mean that the "print" and "script" of cognition have to be entirely the same and indistinguishable.

In many ways I think that the struggle to build intelligent machines in our image has been an incredible boon to our own understanding of ourselves. One of the clearest examples is that the field set out trying to encode the way the conscious mind works, which led to the kinds of "if this, then that"-type programs that dominated computer science for many decades. This sort of "Good Old-Fashioned AI" (GOFAI), as it's called, pretty quickly came up against some fundamental limits. One can't really _articulate_ why or how one judges a blurry photograph to be a cat rather than a dog. In fact there's an enormous class of things that happen without that kind of logical, deductive, step-by-step reasoning process—language acquisition and use, cause-and-effect inference, almost everything involving the senses—and, as it turns out, this is where a lot of the true complexity of brains (not just human ones) resides.
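
To make that contrast concrete, here is a minimal, purely illustrative sketch of what an "if this, then that"-style program looks like. Nothing in it comes from the original text; the feature names and rules are hypothetical, and the point is only that every judgment must be articulated in advance as an explicit rule.

```python
# Illustrative sketch of a GOFAI-style, hand-written rule program.
# Feature names and rules are hypothetical, for demonstration only.

def classify_animal(features: dict) -> str:
    """Classify an animal using explicit, hand-coded rules."""
    if features.get("retractable_claws") and features.get("says_meow"):
        return "cat"
    if features.get("barks") or features.get("fetches_sticks"):
        return "dog"
    return "unknown"

# Works only when someone can state the rules up front:
print(classify_animal({"retractable_claws": True, "says_meow": True}))  # cat

# The blurry-photograph case resists this approach: there is no short,
# statable list of rules separating cat pixels from dog pixels, which is
# why later, brain-inspired methods learn the distinction from examples.
print(classify_animal({"blurry_pixels": [0.1, 0.7, 0.3]}))  # unknown
```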

In fact, I think one of the greatest and most praiseworthy revelations of the failure of GOFAI (and of the subsequent interest in neural networks and other approaches modeled more on the brain than on the mind, per se) is that many of the most complex and sophisticated things minds do are not, in fact, the things we typically thought of as "intelligent" behavior—deriving theorems, playing high-level chess, and so on. They are precisely the things that four-year-olds do. In this way, the project of AI has, perhaps more than anything, vindicated the cognitive complexity of babies and animals.

So, if anything, we have a radically less rigid view of intelligence than when we started working on AI. That, to me, is a big deal.

As to the question of why a computer must act like a human to be considered intelligent—well, I think the answer has a lot to do with who it is that's doing the considering!