Paul Resnick, Professor at the University of Michigan School of Information
Do people assess others' morality or judgment based on their high-entropy positions or on their high-surprisal positions?

On p. 253 of your book you muse about diffs and morality. In class, we discussed this. Should we assess Hillary Clinton's judgment, with hindsight, on her votes as a Senator on 95-5 decisions or on 52-48 decisions? Perhaps taking a high-surprisal position (voting with the 5) should get the highest weight in our assessment, followed by high-entropy positions (52-48 votes), with the least weight going to her positions on 95-5 decisions. Politicians seem to suggest this at times, but I'm not sure how much the public buys it. What do you think?
Very interesting question, and not one I've thought about before in that way.

The first thought that came to my mind was the famous "trolley" thought experiments, where all sorts of strange moral conclusions emerge: for instance, asymmetries in how we judge someone for the consequences of taking an action versus the consequences of _not_ taking an action, that kind of thing. I remember discussing Dick Cheney's line that "Ultimately, I am the guy who pulled the trigger, who fired the round that hit Harry." Do we let him off the hook more the more intervening stages of cause-and-effect he can wedge in between himself and the person he shot? ("Ultimately, I'm the guy who pulled the trigger that released the hammer that impacted the primer that ignited the propellant that expelled the bullet that...")

There is, of course, the bigger distinction in ethics between judging people based on their intentions and based on the consequences of their actions. This is certainly relevant at the ballot box: I would consider voting for someone for mayor who has sensible municipal-level views but a position about State or Federal power that I consider nuts—because becoming mayor wouldn't really enable them to affect that.

Re: Clinton, perhaps ultimately we're asking two different questions. One question is about her character, and the other is about what would happen if she were elected. For the latter, we look at the close votes and the places where she tipped the scales: entropy. For the former, we look at the places where she broke from the herd: surprisal.
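
To make the two measures concrete, here's a minimal sketch in Python (my own illustration, treating each vote split as an empirical probability, which is of course a simplification; the function names are mine). The entropy of the split measures how contested the decision was; the surprisal of a senator's side measures how far she broke from the herd.

```python
import math

def entropy_bits(p: float) -> float:
    """Shannon entropy, in bits, of a binary vote split with majority share p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def surprisal_bits(p: float) -> float:
    """Surprisal, in bits, of taking a side whose share of the vote is p."""
    return -math.log2(p)

for yes, no in [(95, 5), (52, 48)]:
    p = yes / (yes + no)
    print(f"{yes}-{no}: entropy = {entropy_bits(p):.2f} bits; "
          f"voting with the {no} = {surprisal_bits(1 - p):.2f} bits of surprisal")
```

Running this, a 95-5 vote carries about 0.29 bits of entropy, while voting with the 5 carries about 4.32 bits of surprisal; a 52-48 vote carries about 1.00 bit of entropy, and voting with the 48 about 1.06 bits of surprisal. The ordering matches the weighting proposed in the question: dissent on a lopsided vote is by far the most informative signal, close votes come next, and conforming on a 95-5 vote (about 0.07 bits) tells us almost nothing.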

There's a deep question that this leads into, which is whether being moral and living by a strict moral code are, in fact, antithetical. Deciding at some point in one's life to live by an elaborate moral code that determines all of one's future decisions is, on one way of thinking, a turning off of one's moral system.

I think there's a deeper point here that leads back to AI: most computer systems are designed to evaluate a set of options with respect to a predetermined set of criteria (an "objective function"). Most high-level human reasoning, and certainly moral reasoning in my view, exists at a meta level: evaluating the relative importance of one criterion versus another, and deciding whether to scrap the existing objective function and assert a new one in its place.
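
A toy sketch of that distinction (my own illustration, not a real system; all names and weights here are hypothetical): an ordinary program optimizes over options against a fixed objective, while the meta-level move is to revise the weights, or the criteria themselves, before optimizing.

```python
from typing import Callable

# An option is just a bag of attribute scores, e.g. {"cost": 3.0, "safety": 1.0}.
Option = dict

def make_objective(weights: dict[str, float]) -> Callable[[Option], float]:
    """Object-level machinery: score options against a predetermined set of criteria."""
    return lambda option: sum(w * option.get(k, 0.0) for k, w in weights.items())

options = [{"cost": 3.0, "safety": 1.0}, {"cost": 5.0, "safety": 4.0}]

# Object-level reasoning: evaluate the options under the current objective.
objective = make_objective({"cost": -1.0, "safety": 2.0})
best = max(options, key=objective)  # the safer, pricier option wins

# Meta-level reasoning, which most systems lack: deciding the criteria themselves
# were wrong (say, cost matters far more than we first assumed) and asserting
# a new objective function in their place.
objective = make_objective({"cost": -2.0, "safety": 0.5})
best = max(options, key=objective)  # now the cheaper option wins
```

The hard part, of course, is that this sketch still revises its objective by fiat; deciding _when_ and _why_ to revise it is exactly the meta-level judgment the paragraph above is pointing at.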