The Theory of Learning in Games
In economics, most noncooperative game theory has focused on equilibrium in games, especially Nash equilibrium and its refinements. The traditional explanation for when and why equilibrium arises is that it results from analysis and introspection by the players in a situation where the rules of the game, the rationality of the players, and the players' payoff functions are all common knowledge. Both conceptually and empirically, this theory has many problems.
In The Theory of Learning in Games, Drew Fudenberg and David Levine develop an alternative explanation: equilibrium arises as the long-run outcome of a process in which less than fully rational players grope for optimality over time. The models they explore provide a foundation for equilibrium theory and suggest useful ways for economists to evaluate and modify traditional equilibrium concepts.
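As an illustrative sketch (not taken from the book), one of the simplest learning processes of this kind is fictitious play: each player best-responds to the empirical distribution of the opponent's past actions. In the toy coordination game below (payoff matrix and starting actions are my own choices for illustration), the process converges to a pure Nash equilibrium even though neither player reasons about the other's rationality.

```python
# Fictitious play in a symmetric 2x2 coordination game (illustrative sketch).
# Coordinating on action 0 pays 2 to each player, on action 1 pays 1;
# miscoordination pays 0.
PAYOFF = [[2, 0],
          [0, 1]]

def best_response(opponent_counts):
    """Action maximizing expected payoff against the empirical
    frequency of the opponent's past actions."""
    total = sum(opponent_counts)
    expected = [
        sum(PAYOFF[a][b] * opponent_counts[b] / total for b in range(2))
        for a in range(2)
    ]
    return max(range(2), key=lambda a: expected[a])

def fictitious_play(rounds=200, start=(0, 1)):
    # counts[i][b]: how often player i has observed the opponent play b,
    # seeded with the (miscoordinated) initial actions in `start`.
    counts = [[0, 0], [0, 0]]
    counts[0][start[1]] += 1
    counts[1][start[0]] += 1
    actions = start
    for _ in range(rounds):
        actions = (best_response(counts[0]), best_response(counts[1]))
        counts[0][actions[1]] += 1
        counts[1][actions[0]] += 1
    return actions

print(fictitious_play())  # -> (0, 0): play settles on the efficient equilibrium
```

After a brief miscoordinated phase, both empirical distributions tilt toward action 0, so best responses lock in at (0, 0), a steady state of the process and a Nash equilibrium of the game. In games such as matching pennies, by contrast, fictitious play cycles, which is one motivation for the stochastic and smoothed variants the book analyzes.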
Replicator Dynamics and Related Deterministic Models
Stochastic Fictitious Play and Mixed-Strategy Equilibria
Adjustment Models with Persistent Randomness
Extensive-Form Games and Self-Confirming Equilibrium
Nash Equilibrium, Large Population Models, and Mutations