The fundamental flaw in applying game theory to biology is to constrain the possible outcomes when unfairness is detected. In game theory, all you can do is change your strategy for the next round; in reality, we have multiple punishments to bring to bear, such as refusing to play, public shaming, fines, imprisonment, killing, etc.
Game theory is only an "ok" model of cooperation/competition in real life.
Also worth mentioning that while the classic game theory approach is to calculate the optimum strategy, there's also evolutionary game theory, which looks at what strategies could actually be found by an evolutionary process. They tend to be the same, but not always.
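For a concrete toy example of the evolutionary approach (my own numbers, not from the article): replicator dynamics on the standard Hawk-Dove game drive the population to the same mixed equilibrium, V/C hawks, that the classical calculation predicts — one of the cases where the two approaches agree.

```python
# Replicator dynamics for the Hawk-Dove game. With benefit V=2 and
# fight cost C=4, the classical mixed Nash equilibrium is V/C = 0.5
# hawks; the evolutionary process finds the same answer.
V, C = 2.0, 4.0
payoff = {  # (my type, opponent's type) -> my payoff
    ("hawk", "hawk"): (V - C) / 2,
    ("hawk", "dove"): V,
    ("dove", "hawk"): 0.0,
    ("dove", "dove"): V / 2,
}

x = 0.1  # initial fraction of hawks in the population
for _ in range(10000):
    f_hawk = x * payoff[("hawk", "hawk")] + (1 - x) * payoff[("hawk", "dove")]
    f_dove = x * payoff[("dove", "hawk")] + (1 - x) * payoff[("dove", "dove")]
    avg = x * f_hawk + (1 - x) * f_dove
    x += 0.01 * x * (f_hawk - avg)  # replicator equation, Euler step

print(round(x, 3))  # → 0.5, i.e. V/C
```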
"Choosing among a set of games" seems to me to be just a bigger game. "Known histories" can include any iterated game, which is a basic part of game theory. It's also studied a lot in the context of poker, where the challenge is that if you deviate from the game-theory optimal strategy to better exploit your opponent, you open yourself up to exploitation.
For the most part, people cooperate when cooperating leaves them better off. In the same way, everyone is unique, but people copy proven paths to achieve certain ends, or to move through certain stages from which they can eventually break off and get ahead personally. And in the same way, groups suck, but we all live better because there are other people.
All of life is similar: self-interested for survival, but each wanting a place and territory of its own, eventually.
I find it fascinating that twisting the parameters changes the outcome. Thinking about our social world, for example fraud in finance, perhaps there is something to be learned about the way the rules of society should be set up to eliminate cheating.
Game theory is a modeling tool that assumes all relevant utility is baked into the payoff matrix.
Games (payoff matrices) that capture unique outcomes/behaviors are often given memorable names from human situations that approximate the games. For example the matrix referred to as 'prisoners dilemma' demonstrates a situation where dominant actions give a suboptimal outcome. In describing where this game may apply, economists found simultaneous interrogations to be a colorful and close enough approximation to the idealized game.
It's not the case that game theorists started with the situation of interrogating prisoners and ended up with the game matrix.
Similarly, for anyone saying that 'that wouldn't happen in real life', you're actually saying that the payoffs don't accurately model the outcomes.
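To make the "dominant actions give a suboptimal outcome" structure concrete, here's a minimal sketch using the usual textbook sentence numbers (my choice of numbers, not anything from the article):

```python
# One-shot Prisoner's Dilemma with standard illustrative payoffs
# (sentences in years, so lower is better).
PAYOFFS = {  # (my move, their move) -> my sentence in years
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    3,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    2,
}

def best_response(their_move):
    """My sentence-minimizing move, given the other player's move."""
    return min(["cooperate", "defect"],
               key=lambda mine: PAYOFFS[(mine, their_move)])

# Defecting is dominant: it's the best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (2 years each) is worse for both than
# mutual cooperation (1 year each): the suboptimal outcome.
assert PAYOFFS[("defect", "defect")] > PAYOFFS[("cooperate", "cooperate")]
```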
Correct, and they mostly use the prisoner's dilemma, which is very simplistic and probably not what happens in nature (or in common situations) most of the time.
Consider the effort it takes 1 person to build a house, or the amount of effort it would take to build an mp3 player alone from scratch.
Game theory would have you believe the optimal way to win at poker is to booby trap the card table and rob your competition.
Seriously, this is not that hard. If the prey is relatively mobile, for most predators the jig is up once the prey is aware of them. If the prey is alerting its friends that a predator is around, then that particular animal has already seen the predator and is usually therefore relatively safe from it. For example, if an angry dog is barrelling towards your unsuspecting friend, shouting out may draw attention towards you, but you can shout and take countermeasures at the same time. It's not a zero-sum game, and you don't need hundreds of iterations to make it beneficial to yourself. I mean, hell, watch a random Attenborough nature special and you're likely to hear him talk about the prey spotting the predator, leading to an abandonment of the hunt.
The strange maths continues with the "Bat's Dilemma" example. In the case where both bats do the same thing, share or not share, there are discordant outcomes. Same population of bats, same amount of food available, yet somehow there is much more hunger if they don't share. This would only make sense if each bat only occasionally got a meal from a source much larger than it could eat by itself, then had a long period without finding food... in which case sharing the excess really isn't a dilemma, since it's excess. I really don't understand the maths in that example.
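Though that "occasional oversized meal plus long fasts" scenario is on its own enough to produce the asymmetry. A toy simulation (entirely my own made-up numbers, not the article's model) shows the same total food producing far more hunger without sharing:

```python
# Toy model: each night a bat finds a blood meal with probability p,
# and a meal is bigger than one bat needs, so sharing the excess
# costs the finder nothing.
import random

random.seed(0)
N_BATS, NIGHTS, p = 5, 10_000, 0.3

hungry_alone = hungry_sharing = 0
for _ in range(NIGHTS):
    found = [random.random() < p for _ in range(N_BATS)]
    hungry_alone += found.count(False)  # each unlucky bat goes hungry
    if not any(found):                  # with sharing, everyone eats,
        hungry_sharing += N_BATS        # unless nobody found anything

print(hungry_alone, hungry_sharing)  # sharing: far fewer hungry bat-nights
```

With these numbers a solitary bat goes hungry 70% of nights, while a sharing group is hungry only when all five fail at once (about 17% of nights) — same food, much less hunger.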
Game theory really does seem to be a hammer desperately searching for anything that looks like it might possibly be a nail. It is interesting that the end of the article says it has some suitability for microbe research, where things are much more stimulus/response and much less complex.
No, that is not right, if it was there would be no dilemma, and this subject would not be discussed ad nauseam. The whole point is that the situation is symmetrical for both players, they should reach the same conclusion and act the same way... and using the strategy of cooperation their outcome is better than defecting.
Game Theory models strategic situations and doesn't offer insight outside what is modeled. If you think there should be communication supporting cooperation in the game, that game is NOT the Prisoners Dilemma and is in fact another game.
The Prisoners Dilemma is a model that is stacked very much against cooperation. Think about it, the prisoners are held in separate rooms and not allowed to communicate at all in the original story.
A real game show [1] put two people in a position of the prisoners dilemma and they were free to communicate. [2]
There is no moral choice-making for maximizing rational actors, and both actors in the PD have exactly the same information, including the fact that the other individual is a maximizing rational actor. As such the off diagonal elements of the payoff matrix are irrelevant to any rational decision making because any two rational actors with the same goal will always make the same choice in the same situation. To do anything else would be irrational.
So within the frame of the theory both players know with certainty that because the game is being played by maximizing rational actors that the other player will always do exactly what they do. This is true no matter what they do: the other player will always reach the same conclusion. Rationality dictates it, if rationality means anything at all.
It is only when you smuggle in the possibility of an irrational choice on the part of one of the players that the off-diagonal elements become relevant, because one player can for unaccountable reasons choose to do something irrational, which a maximizing rational actor would never do.
Game theory is not about people. It's about rational actors who want to maximize their payoff. For such entities there is no dilemma, since only the diagonal elements of the matrix matter, and cooperation is the obvious maximizing strategy.
Unfortunately, game theory under this constraint becomes very boring. There is probably a salvageable variant of it that remains interesting, but I'm honestly not sure what it's a theory of. "Semi-rational not-very-smart actors"? That would describe humans reasonably well, I guess. It certainly describes me. Or maybe the decision-maker being analyzed could be considered a rational actor and the rest of the players irrational, although that would be equivalent to playing against a random number generator.
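The argument above, in code (standard textbook PD sentences, lower is better; whether the symmetry assumption is legitimate is exactly what's disputed here):

```python
# If both players are guaranteed to choose identically, only the
# diagonal of the payoff matrix is ever reachable, and comparing the
# diagonal entries trivially picks cooperation.
PAYOFFS = {  # (my move, their move) -> my sentence in years
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    3,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    2,
}

diagonal = {m: PAYOFFS[(m, m)] for m in ("cooperate", "defect")}
assert min(diagonal, key=diagonal.get) == "cooperate"
```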
http://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterat...
Since both players think this way, they both defect and get a bad outcome.
Real prisoners don't always defect, but the reason is that they're not actually playing the prisoner's dilemma. The payoff matrix has been altered, e.g. by severe penalties for ratting, which give the player a much worse outcome for defecting. Those penalties wouldn't be necessary if the dilemma weren't real. (Another alteration would be if the prisoners care enough about each other that each sees it as a bad outcome if the other suffers. This isn't contrary to game theory; it's just a different set of payoffs.)
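A sketch of that point with made-up numbers: start from a standard PD sentence matrix (lower is better), then add a severe "penalty for ratting" to every outcome where you defect, and defection stops being dominant — it's a different game.

```python
# Standard PD sentences plus an external penalty for defecting.
BASE = {  # (my move, their move) -> my sentence in years
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    3,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    2,
}
RAT_PENALTY = 5  # cost, in year-equivalents, of being known as a rat

def cost(mine, theirs, penalty=0):
    return BASE[(mine, theirs)] + (penalty if mine == "defect" else 0)

def best_response(theirs, penalty=0):
    return min(["cooperate", "defect"],
               key=lambda m: cost(m, theirs, penalty))

# In the unaltered game, defecting dominates...
assert best_response("cooperate") == best_response("defect") == "defect"
# ...but with the ratting penalty, cooperation dominates instead.
assert best_response("cooperate", RAT_PENALTY) == "cooperate"
assert best_response("defect", RAT_PENALTY) == "cooperate"
```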
In fact I think it's more interesting to find the systems in which a given strategy is successful, as opposed to finding a successful strategy given the system. There are all sorts of interesting questions that arise from this point of view, the most obvious being how one (or an animal population) changes the inputs that form the system in such a way that the given strategy becomes successful.
Carrot and stick: The two poles of community (of the community magnet).
This is wrong and should significantly reduce any trust you may have had in the journalist who wrote the piece. Kin selection is about how helping family members helps the genes that make the individual. Behaviour will spread if it increases inclusive fitness. If you can save a sibling (coefficient of relatedness 0.5) with probability 1 and the chance of you dying is 0.4 you do it. If P(death) is 0.51 or higher you don't.
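That threshold is just Hamilton's rule, rB > C, with the comment's numbers plugged in:

```python
# Hamilton's rule: help if relatedness * benefit exceeds cost, all
# measured in expected copies of the gene (you are related to
# yourself with r = 1, so the cost of dying is P(death) * 1).
def should_help(r, p_save, p_death):
    """True if inclusive fitness favors helping."""
    return r * p_save > p_death

# Save a sibling (r = 0.5) with certainty, at a 0.4 risk of death: do it.
assert should_help(r=0.5, p_save=1.0, p_death=0.4) is True
# At a 0.51 (or higher) risk of death: don't.
assert should_help(r=0.5, p_save=1.0, p_death=0.51) is False
```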
>Group selection proposes that cooperative groups may be more likely to survive than uncooperative ones.
The conditions necessary for group selection are incredibly strong and very rarely occur in practice in biological settings. When they do you get hive organisms like naked mole rats or the Hymenoptera. There is stronger evidence for group selection in cultural evolution than in most of biology.
Further reading http://lesswrong.com/lw/kw/the_tragedy_of_group_selectionism
>“As mutations that increase the temptation to defect sweep through the group, the population reaches a tipping point,” Plotkin said. “The temptation to defect is overwhelming, and defection rules the day.”
>Plotkin said the outcome was unexpected. “It’s surprising because it’s within the same framework — game theory — that people have used to explain cooperation,” he said. “I thought that even if you allowed the game to evolve, cooperation would still prevail.”
>The takeaway is that small tweaks to the conditions can have a major effect on whether cooperation or extortion triumphs. “It’s quite neat to see that this leads to qualitatively different outcomes,” said Jeff Gore, a biophysicist at the Massachusetts Institute of Technology who wasn’t involved in the study. “Depending on the constraints, you can evolve qualitatively different kinds of games.”
Mathematicians develop model that gives us a deeper understanding of the shallowness of our understanding of cooperation.
Unfortunately my math isn't strong enough to understand the paper but you'll get a much better understanding of how game theory applies to biology from The Selfish Gene by Richard Dawkins than from this article.
Don't read anything by Stephen Jay Gould http://pleiotropy.fieldofscience.com/2009/02/krugman-on-step...
http://en.wikipedia.org/wiki/Stephen_Jay_Gould#The_Mismeasur... In 2011, a study conducted by six anthropologists reanalyzed Gould's claim that Samuel Morton unconsciously manipulated his skull measurements,[82] and concluded that Gould's analysis was poorly supported and incorrect. They praised Gould for his "staunch opposition to racism" but concluded, "we find that Morton's initial reputation as the objectivist of his era was well-deserved."[83] Ralph Holloway, one of the co-authors of the study, commented, "I just didn't trust Gould. ... I had the feeling that his ideological stance was supreme. When the 1996 version of 'The Mismeasure of Man' came and he never even bothered to mention Michael's study, I just felt he was a charlatan."[84] The group's paper was reviewed in the journal Nature, which recommended a degree of caution, stating "the critique leaves the majority of Gould's work unscathed," and notes that "because they couldn't measure all the skulls, they do not know whether the average cranial capacities that Morton reported represent his sample accurately."[85] The journal stated that Gould's opposition to racism may have biased his interpretation of Morton's data, but also noted that "Lewis and his colleagues have their own motivations. Several in the group have an association with the University of Pennsylvania, and have an interest in seeing the valuable but understudied skull collection freed from the stigma of bias."