Mean field games and control theory

Peter Caines
McGill University

Multi-agent competitive and cooperative systems occur in a vast range of designed and natural settings. Inspired by statistical mechanics, Mean Field Game (MFG) theory studies the existence of Nash equilibria, together with the individual strategies which generate them, in games involving a large number of agents, each modelled by a controlled stochastic dynamical system. This is achieved, for problems that would be intractable by conventional game theory, by exploiting the relationship between the finite population problems and their infinite population limits.

The equilibria of the infinite population problems are obtained from the fundamental MFG Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations, or, equivalently, the systems' McKean-Vlasov stochastic differential equations (Huang, Caines and Malhamé, 2003, 2006, 2007; Lasry and Lions, 2006, 2007); these equations are linked by the state distribution of a generic agent, otherwise known as the system's mean field. For large population stochastic dynamic games, there exist individual feedback strategies, depending on the mean field, such that any given agent is approximately in a Nash equilibrium with respect to the mass of the other agents.
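One standard finite-horizon statement of the coupled HJB-FPK system can be sketched as follows; the notation (value function u, mean field density m, diffusion coefficient ν, Hamiltonian H, costs f and g) is illustrative and not fixed by this abstract:

```latex
% Backward HJB equation for the generic agent's value function u(t,x),
% coupled to the forward FPK equation for the mean field density m(t,x):
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, \nabla u) &= f(x, m(t)),
    & u(T, x) &= g(x, m(T)), \\
  \partial_t m - \nu \Delta m
    - \operatorname{div}\!\bigl( m \, \nabla_p H(x, \nabla u) \bigr) &= 0,
    & m(0, \cdot) &= m_0 ,
\end{aligned}
```

with the induced individual feedback strategy given by the optimal drift, here alpha*(t, x) = -∇_p H(x, ∇u(t, x)). The HJB equation is solved backward given m, and the FPK equation propagates m forward given u; an MFG equilibrium is a consistent fixed point of this loop.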

In this talk we shall (i) present the basic notions of MFG control theory, including the explicitly solvable linear-quadratic case, (ii) outline some of the extensions of the theory to adaptive, egoist-altruist, flocking and coalition-seeking behaviours, together with the theory of major-minor agent MFG control, and (iii) indicate open problem areas.
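To make the explicitly solvable linear-quadratic case concrete, here is a minimal numerical sketch of a scalar LQ MFG solved by fixed-point iteration. The model (dynamics dx = u dt + σ dW, running cost (x − γ·m(t))² + r·u² over [0, T]), all parameter values, and the damped iteration scheme are illustrative assumptions, not material from the talk:

```python
import numpy as np

# Illustrative scalar LQ mean field game (hypothetical parameters):
# each agent has dynamics dx = u dt + sigma dW and running cost
# (x - gamma * m(t))^2 + r * u^2, where m(t) is the population mean.
T, N = 1.0, 1000
dt = T / N
r, gamma, m0 = 1.0, 0.5, 1.0

# Backward Riccati equation P' = P^2 / r - 1, P(T) = 0 (independent of m).
P = np.zeros(N + 1)
for k in range(N, 0, -1):
    P[k - 1] = P[k] - dt * (P[k] ** 2 / r - 1.0)

m = np.full(N + 1, m0)  # initial guess for the mean trajectory
for it in range(200):
    # Backward offset equation s' = P s / r + gamma * m, s(T) = 0.
    s = np.zeros(N + 1)
    for k in range(N, 0, -1):
        s[k - 1] = s[k] - dt * (P[k] * s[k] / r + gamma * m[k])
    # Forward mean dynamics under the best response u* = -(P x + s) / r.
    m_new = np.empty(N + 1)
    m_new[0] = m0
    for k in range(N):
        m_new[k + 1] = m_new[k] - dt * (P[k] * m_new[k] + s[k]) / r
    if np.max(np.abs(m_new - m)) < 1e-10:
        m = m_new
        break
    m = 0.5 * m + 0.5 * m_new  # damped fixed-point update

print(f"fixed point after {it} iterations; m(T) = {m[-1]:.4f}")
```

The structure mirrors the general theory: a backward equation gives each agent's best response to an assumed mean field, the forward equation propagates the mean under that response, and the equilibrium is the consistent fixed point of the two.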