The little story called The Prisoner's Dilemma ignores just about every fact about a real *Law and Order* type situation that could possibly be relevant to thinking about it. Let us look at just a few of the things that are assumed away.

1. The situation is treated as a two person game. But there are obviously many more than two people involved. First of all, there are the cops who are putting the squeeze on the prisoners. In the real world, they are an important part of the situation, and real prisoners will try, quite rationally, to figure out whatever they can about the cops that will help them make their decision. Furthermore, in the American justice system, the prisoners will have lawyers. So at a bare minimum, this is a five person game [one cop, two prisoners, two lawyers].

2. To force the story into a 2 x 2 matrix, one must suppose that each player has only two strategies. Recall what I said about how extraordinarily simple a game must be to offer only two strategies to each player. In the real world, there will be an arraignment, and there will be some jockeying over venue and date of trial and which judge is going to hear the case and whether to opt for a jury trial or go for a bench trial. Lots of moves, therefore lots of strategies, therefore no 2 x 2 matrix.

3. To make the story fit the matrix ["the punishment fit the crime"], we must abstract from every important fact about the two criminals, including sex, race, religion, personal relationship, past history with the criminal justice system, and so on and on, and then we must assume, against all plausibility, that each criminal will rank the outcomes purely on the basis of the length of the jail sentence to himself or herself.

Now, if we could, by doing all of this, draw conclusions whose validity is totally independent of all the details we have abstracted from, just as the validity of geometric calculation is independent of the color of the shapes whose area we are computing, then we would indeed have a very powerful tool for the analysis of economic, political, legal, and military problems. It would be a tool that could both help us to predict how people *will* act and also enable us to prescribe how rational individuals *should* act. But in fact, what remains when we have stripped away all the detail necessary to reduce a complex situation to a 2 x 2 matrix is a structure that neither assists in prediction nor guides us in prescription.

If we focus simply on the formal structure of a two person game with two pure strategies for each player, it is obvious that there are 24 different orders in which each player can rank the four outcomes, setting to one side for the moment the possibility of indifference. How do I arrive at this number? Simple. A [or B] has four choices for the number one spot in the ranking. For each of these, there are three possibilities for the number two spot. There are then two ways of choosing among the remaining two outcomes for the number three spot, at which point the remaining outcome is ranked number four. 4 x 3 x 2 x 1 = 24. Since A's rankings are logically independent of B's rankings, there are 24 x 24 = 576 possible combinations of rankings by A and B of the outcomes of the four possible strategy pairs. The Prisoner's Dilemma is simply one of those 576, to which a story has been attached.
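
The counting argument above is easy to verify mechanically. Here is a short sketch (my illustration, not part of the original post) that enumerates the strict orderings:

```python
# Brute-force check of the counting argument: each player can rank the
# four outcomes O11, O12, O21, O22 in 4 x 3 x 2 x 1 = 24 strict orders,
# and since A's ranking is independent of B's, there are 24 x 24 = 576
# possible combinations of rankings.
from itertools import permutations

outcomes = ["O11", "O12", "O21", "O22"]
rankings = list(permutations(outcomes))

print(len(rankings))                   # 24 strict orders per player
print(len(rankings) * len(rankings))   # 576 ordered pairs of rankings
```

The Prisoner's Dilemma story is attached to exactly one of those 576 pairs.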

People enamored of this sort of thing have thought up little stories for some of the other possible pairs of rankings. [The following examples come from the pages of Baird, Gertner, and Picker, mentioned earlier]. For example, the following pair has had attached to it a story about The Battle of the Sexes [now fallen into disfavor for reasons of political correctness]:

A: O21 > O12 > O22 > O11

B: O21 > O12 > O11 > O22

Another pair of preference orders has a story about collective bargaining attached to it:

A: O21 > O11 > O12 > O22

B: O12 > O11 > O21 > O22

If we allow for indifference, then there are lots more possible pairs of preference orders. Here is one that has a story attached to it called The Stag Hunt:

A: O11 > O21 = O22 > O12

B: O11 > O12 = O22 > O21
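
These ordinal rankings are enough to find the pure-strategy equilibria mechanically. The following sketch (mine, not part of the original discussion) encodes the Stag Hunt rankings as ordinal payoffs, using the convention that Oij means A plays strategy i and B plays strategy j, and brute-forces the outcomes from which neither player would deviate alone:

```python
# A's ranking O11 > O21 = O22 > O12 and B's ranking O11 > O12 = O22 > O21,
# encoded as ordinal utilities (higher number = more preferred; ties get
# equal numbers). Convention: outcome (i, j) means A plays i, B plays j.
uA = {(1, 1): 2, (2, 1): 1, (2, 2): 1, (1, 2): 0}
uB = {(1, 1): 2, (1, 2): 1, (2, 2): 1, (2, 1): 0}

def pure_equilibria(uA, uB):
    """Outcomes where neither player gains by switching strategy alone."""
    eq = []
    for i in (1, 2):
        for j in (1, 2):
            a_ok = uA[(i, j)] >= uA[(3 - i, j)]   # A cannot do better
            b_ok = uB[(i, j)] >= uB[(i, 3 - j)]   # B cannot do better
            if a_ok and b_ok:
                eq.append((i, j))
    return eq

print(pure_equilibria(uA, uB))   # [(1, 1), (2, 2)]
```

The two equilibria it finds, O11 and O22, are what make the Stag Hunt a coordination problem rather than a dilemma of dominant strategies.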

I have no doubt that with sufficient time and imagination, one could think up many more stories to attach to yet other pairs of ordinal rankings of the four outcomes in a game with two pure strategies for each player. None of these little preference structures really models, in a useful way, relations between men and women, or collective bargaining, or stag hunts. [Matching pennies, by contrast, really is a game, with all the simplifications and rules and such that characterize games, so there is no reason at all why a Game Theoretic analysis should not be useful in understanding it; but one doesn't often encounter real world situations, even in Las Vegas casinos, where people are engaged in matching pennies.]

What is the upshot of this rather bilious discussion of The Prisoner's Dilemma? Put simply, it is this: The abstractions and simplifications required to transform a real situation of choice, deliberation, conflict, and cooperation into a two-person game suitable for Game Theoretic analysis fail to identify formal or structural features of the situation that are, at one and the same time, essential to the nature of the situation and independent of the facts or characteristics that have been set aside in the process of simplification. Identifying such features, after all, is exactly what does happen when we reduce an informal argument to a syllogism. Consequently, anything we can infer from the formal syllogistic structure of the argument must hold true for the full argument, once the content we have abstracted from is reintroduced.

Just to make sure this point is clear: Suppose I come upon a text in which the author tries to establish that some Republicans are honorable. She begins, we may suppose, by noting that all Republicans are Americans, and then offers evidence to support the claim that some Americans are honorable, whereupon she concludes that some Republicans are honorable. When we convert this to syllogistic form, it becomes: All A are B. Some B are C. Therefore, Some A are C. Thus separated from its content, the argument is quickly seen to be invalid [although, let us remember, that fact does **not** imply that the conclusion is false, only that it has not been established by the argument. Fair is fair.] The As that are B may not be among the Bs that are C. [Venn diagrams, anyone?] In this case, the abstraction required to convert the informal argument into syllogistic form succeeds in identifying a formal structure of the original argument. Hence the formal analysis is valid.
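
The invalidity can also be exhibited with a concrete countermodel rather than a Venn diagram. This tiny check (my illustration, with arbitrarily chosen sets, not anything from the post) makes both premises true and the conclusion false:

```python
# Countermodel for "All A are B; Some B are C; therefore Some A are C":
# the Bs that are C need not include any of the As.
A = {1}        # every A is a B ...
B = {1, 2}
C = {2}        # ... and some B (namely 2) is C, yet no A is C

assert A <= B          # premise 1: All A are B
assert B & C           # premise 2: Some B are C
assert not (A & C)     # conclusion fails: no A is C
print("premises true, conclusion false: the form is invalid")
```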

But in the case of the Prisoner's Dilemma, essential elements of the original situation must be simplified away, removing aspects of the situation that are structurally essential to it. The result is not to lay bare the underlying formal structure of the original situation, but rather to substitute for the original situation another, simpler situation that can be exhibited in appropriate Game Theoretic form. The reasoning concerning this new situation is correct, but there is no reason to suppose that it applies as well to the original situation.

Conclusion: Be not beguiled by 2 x 2 matrices.

## 7 comments:

Thanks for these 3 posts; very well done.

this makes a nice pairing with your posts (it's very funny, it's not quite the prisoners' dilemma, and it's compatible with your critique...)

http://www.youtube.com/watch?v=S0qjK3TWZE8

Hello! Long time listener, approximately first-time caller.

I have a question.

I've studied some of the discussions in the traditional literature about this problem. I've read through your comments about it. I think it's a really great collection of posts: thank you.

Now for the question. Suppose my interest in the Prisoner's Dilemma has nothing to do with a confusion about whether it is meant to be descriptive of actual situations of rational choice, or whether it is meant to be a prescription for how to choose in those situations. That is to say, I'm making neither of the mistakes you mention explicitly, I think.

Instead, my interest is this: It seems to me intuitive that adopting a rational strategy for choosing between options I have on my own ought to be compatible with, or the same as, or interestingly related to my strategy for choosing between options I have when I am part of a group. One such strategy that seems as if it should work is a "dominance" strategy. Given all of the rich complexities and details of a real-life situation, it seems to me still that I ought to be able to articulate a rational strategy for decision-making that would work -- and the one I choose on my own, I would hope, could harmonize usefully when used in a group, even if the strategy might change somewhat in that situation.

I wonder if you reject even that, but I'm not sure -- I don't think it's actually discussed explicitly in the text. In any case, do you think I'm misguided if I'm interested in the Prisoner's Dilemma because it seems to me that it shows something interesting, and perhaps a little troubling, about the fact that plausible rational strategies that work well when I'm reasoning on my own seem to fail when generalized to a group?

David, I took a look at the YouTube link. It is marvelous. It is actually an example of what Schelling calls a coordination game, a lovely one. Schelling has a great discussion of them in THE STRATEGY OF CONFLICT.

A complete aside - I do a fair bit of programming and statistics in my spare time. In my travels I often have to maximize a function that comes from the "real world" - meaning, it isn't nicely behaved like a line, or even stable over time. To maximize it, a favourite trick is simulated annealing. Basically, you have two states: (1) the rational state, where you "walk uphill" from wherever you are located till you get to the top of the hill; or (2) once you get to the top of a hill, you take a random jump somewhere and repeat the rational state to see if you can do better (you keep track of your previous best and return to it if the random jump didn't help). As the algorithm runs, you make the random jumps smaller and smaller - at which point you are (probably) at a global maximum.

The key here is the counter-intuitive "leap of faith" stage. A lot of logical analysis is of the rational type - keep marching uphill - but the algorithm requires those random leaps to make sure you don't get caught at a merely local maximum.

I've found myself using that algorithm as a nice metaphor (or excuse, depending on the situation) for NOT doing the logical thing. A lot of life seems like we are built with simulated annealing machinery built into us. But mainly, it gave me the insight that logical thinking can be self-defeating because it tends to be very limited in its timeframe.
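
For what it's worth, the two-state procedure described above can be sketched in a few lines. The bumpy function and all the parameters here are illustrative assumptions of mine, not anything from the comment:

```python
# Sketch of the commenter's two-state procedure: (1) walk uphill from the
# current point; (2) at a peak, take a random jump, keep the best point
# seen so far, and shrink the jumps over time.
import math
import random

def f(x):
    # A bumpy curve with many local peaks; its global maximum is at x = 0.
    return math.cos(3 * x) - 0.1 * x * x

def climb(x, step=0.01):
    """Greedy uphill walk: move in small steps until no move improves f."""
    while True:
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            return x

random.seed(0)
best = climb(random.uniform(-5, 5))     # first uphill walk
jump = 4.0
for _ in range(50):
    candidate = climb(best + random.uniform(-jump, jump))
    if f(candidate) > f(best):          # keep the best point seen so far
        best = candidate
    jump *= 0.9                         # jumps get smaller as we go

# Prints the best point found; with enough jumps this tends to end up
# near the global peak rather than stuck on a local one.
print(round(best, 2), round(f(best), 2))
```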

In the prisoner's dilemma case, I'm sure that it comes up all the time, with all the complexity and messiness you point out. But I think about a human response to this - honour, friendship and love. If the two criminals are friends, or even lovers, then they sidestep the logical thing to do and jump straight to the optimal solution - don't squeal!

I've gotten a fair kick out of the prisoner's dilemma over time, thinking about it in this way - logic is a tool, but certainly not the only one.

That is fascinating. It calls to mind the following fact about Two Person Games: If the game is zero sum [see my elaborate explanation of exactly what "zero sum" actually means -- it is quite precisely defined by von Neumann] then there cannot be a local maximum. All maxima are equal, hence the players are indifferent among them. But in a two person game that is not zero sum, there can be a number of local maxima, many of which are inferior to the greatest local maximum. In that case, players could quite easily get stuck at a local maximum, because a small move by either one away from that point would be inferior for the player.

I read somewhere that game theory is only good for predicting the behavior of game theorists. I think it’s also good for disarming a lot of bad free-market triumphalism. That’s what I picked up from Sam Bowles’s "Microeconomics." Whether the free market works its magic or not depends on the situation. Bowles treats mutually beneficial trade as just one of many possible strategic interactions (he calls it the invisible hand, and it’s lumped in with the prisoner’s dilemma and the stag hunt). He’s got a great example:

Like the overnight train that left me in an empty field some distance from the settlement, the process of economic development has for the most part bypassed the two hundred or so families that make up the village of Palanpur. They have remained poor, even by Indian standards: less than a third of the adults are literate, and most have endured the loss of a child to malnutrition or to illnesses that are long forgotten in other parts of the world. But for the occasional wristwatch, bicycle, or irrigation pump, Palanpur appears to be a timeless backwater, untouched by India’s cutting edge software industry and booming agricultural regions. Seeking to understand why, I approached a sharecropper and his three daughters weeding a small plot. The conversation eventually turned to the fact that Palanpur farmers sow their winter crops several weeks after the date at which yields would be maximized. The farmers do not doubt that earlier planting would give them larger harvests, but no one, the farmer explained, is willing to be the first to plant, as the seeds on any lone plot would be quickly eaten by birds. I asked if a large group of farmers, perhaps relatives, had ever agreed to sow earlier, all planting on the same day to minimize losses. “If we knew how to do that,” he said, looking up from his hoe at me, “we would not be poor.”
