Computer Vision: Models, Learning, and Inference

This modern treatment of computer vision focuses on learning and inference in probabilistic models as a unifying theme. It shows how to use training data to learn the relationships between the observed image data and the aspects of the world that we wish to estimate, such as the 3D structure or the object class, and how to exploit these relationships to make new inferences about the world from new image data. With minimal prerequisites, the book starts from the basics of probability and model fitting and works up to real examples that the reader can implement and modify to build useful vision systems. Primarily meant for advanced undergraduate and graduate students, the detailed methodological presentation will also be useful for practitioners of computer vision.

- Covers cutting-edge techniques, including graph cuts, machine learning, and multiple view geometry.
- A unified approach shows the common basis for solutions of important computer vision problems, such as camera calibration, face recognition, and object tracking.
- More than 70 algorithms are described in sufficient detail to implement.
- More than 350 full-color illustrations amplify the text.
- The treatment is self-contained, including all of the background mathematics.
- Additional resources at www.computervisionmodels.com.
Contents
Introduction  1
I  Probability  7
II  Machine learning for machine vision  53
III  Connecting local models  171
IV  Preprocessing  267
V  Models for geometry  295
VI  Models for vision  385
VII  Appendices  505