## Controlled Markov Processes and Viscosity Solutions

This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. We approach stochastic control problems by the method of dynamic programming. The fundamental equation of dynamic programming is a nonlinear evolution equation for the value function. For controlled Markov diffusion processes on n-dimensional Euclidean space, the dynamic programming equation becomes a nonlinear partial differential equation of second order, called a Hamilton–Jacobi–Bellman (HJB) partial differential equation. The theory of viscosity solutions, first introduced by M. G. Crandall and P.-L. Lions, provides a convenient framework in which to study HJB equations. Typically, the value function is not smooth enough to satisfy the HJB equation in a classical sense. However, under quite general assumptions the value function is the unique viscosity solution of the HJB equation with appropriate boundary conditions. In addition, the viscosity solution framework is well suited to proving continuous dependence of solutions on problem data.

The book begins with an introduction to dynamic programming for deterministic optimal control problems in Chapter I, and to the corresponding theory of viscosity solutions in Chapter II. A rather elementary introduction to dynamic programming for controlled Markov processes is provided in Chapter III. This is followed by the more technical Chapters IV and V, which are concerned with controlled Markov diffusions and viscosity solutions of HJB equations. We have tried, through illustrative examples in early chapters and the selection of material in Chapters VI–VII, to connect stochastic control theory with other mathematical areas (e.g. large deviations theory) and with applications to engineering, physics, management, and finance. Chapter VIII is an introduction to singular stochastic control.
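For orientation, the second-order HJB equation mentioned above can be written in a standard finite-horizon form. The notation below is generic and illustrative (the symbols b, σ, L, ψ and the set U are not taken from the book): for a controlled diffusion dX_s = b(X_s, u_s) ds + σ(X_s, u_s) dW_s with running cost L and terminal cost ψ, the value function V(t, x) formally satisfies

```latex
% Standard finite-horizon HJB equation for a controlled diffusion
% (generic notation; b, \sigma, L, \psi, U are illustrative choices).
\frac{\partial V}{\partial t}(t,x)
  + \min_{u \in U} \Big[\, b(x,u) \cdot D_x V(t,x)
  + \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(x,u)\, D_x^2 V(t,x)\big)
  + L(x,u) \Big] = 0,
\qquad V(T,x) = \psi(x).
```

The nonlinearity enters through the minimization over controls u, and the second-order term tr(σσᵀ D²V) is what distinguishes the stochastic (diffusion) case from the first-order equations of deterministic control treated in the book's opening chapters.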
