Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control
* Unique in the range of topics it surveys.
* Takes a strong interdisciplinary approach that will appeal to both students and researchers.
* Features exercises and web links to software and data sets.
2 Direct Methods for Stochastic Search
3 Recursive Estimation for Linear Models
4 Stochastic Approximation for Nonlinear Root-Finding
5 Stochastic Gradient Form of Stochastic Approximation
6 Stochastic Approximation and the Finite-Difference Method
7 Simultaneous Perturbation Stochastic Approximation
8 Annealing-Type Algorithms
9 Evolutionary Computation I: Genetic Algorithms
10 Evolutionary Computation II: General Methods and Theory
11 Reinforcement Learning via Temporal Differences
12 Statistical Methods for Optimization in Discrete Problems
13 Model Selection and Statistical Information
14 Simulation-Based Optimization I: Regeneration, Common Random Numbers, and Selection Methods
15 Simulation-Based Optimization II: Stochastic Gradient and Sample Path Methods
16 Markov Chain Monte Carlo
17 Optimal Design for Experimental Inputs
Appendix A Selected Results from Multivariate Analysis
Appendix B Some Basic Tests in Statistics
Appendix C Probability Theory and Convergence
Appendix D Random Number Generation
Appendix E Markov Processes
Page 561 - Adaptive identification and control algorithms for nonlinear bacterial growth systems.
Page 6 - Because of the inherent limitations of the vast majority of optimization algorithms, it is usually only possible to ensure that an algorithm will approach a local minimum with a finite amount of resources being put into the optimization process. However, since the local minimum may still yield a significantly improved solution (relative to no formal optimization process at all), the local minimum may be a fully acceptable solution for the resources available (human time, money, computer time, etc.)...
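The passage above argues that converging to a local minimum is often an acceptable, practical outcome. As a minimal sketch of the kind of algorithm the book covers, the following implements basic simultaneous perturbation stochastic approximation (SPSA, the subject of Chapter 7), which estimates the gradient from only two noisy loss measurements per iteration regardless of dimension. The function name `spsa_minimize` and the specific gain constants `a` and `c` are illustrative choices; the decay exponents 0.602 and 0.101 follow the commonly cited practical guidelines for SPSA gain sequences.

```python
import random

def spsa_minimize(y, theta0, a=0.2, c=0.1, alpha=0.602, gamma=0.101,
                  n_iter=1000, rng=None):
    """Minimize E[y(theta)] using only noisy loss measurements (basic SPSA)."""
    rng = rng or random.Random(0)
    theta = list(theta0)
    for k in range(n_iter):
        a_k = a / (k + 1) ** alpha   # decaying step-size gain
        c_k = c / (k + 1) ** gamma   # decaying perturbation size
        # Simultaneous perturbation: one random +/-1 draw per coordinate
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        y_plus = y([t + c_k * d for t, d in zip(theta, delta)])
        y_minus = y([t - c_k * d for t, d in zip(theta, delta)])
        # Every gradient component reuses the same two loss measurements
        g = [(y_plus - y_minus) / (2.0 * c_k * d) for d in delta]
        theta = [t - a_k * g_i for t, g_i in zip(theta, g)]
    return theta

# Noisy measurement of the quadratic loss L(theta) = theta_1^2 + theta_2^2,
# whose minimum is at (0, 0); the measurement noise is Gaussian.
noise = random.Random(1)
def y(theta):
    return sum(t * t for t in theta) + noise.gauss(0.0, 0.1)

theta_hat = spsa_minimize(y, [1.0, -1.0], rng=random.Random(2))
print(theta_hat)  # both coordinates should end up near the optimum at (0, 0)
```

Because the loss is only ever observed with noise, the iterates hover near the optimum rather than converging exactly, which mirrors the passage's point: a good-enough solution for a fixed measurement budget, not a certified global minimum.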