John Wiley & Sons, Nov 9, 2012 - Technology & Engineering - 680 pages
The first edition, published in 1973, has become a classic reference in the field. This second edition adds coverage of key new topics, including neural networks and statistical pattern recognition, the theory of machine learning, and the theory of invariances. Also included are worked examples, comparisons between different methods, extensive graphics, and expanded exercises and computer project topics.
An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
2 BAYESIAN DECISION THEORY
3 MAXIMUM-LIKELIHOOD AND BAYESIAN PARAMETER ESTIMATION
4 NONPARAMETRIC TECHNIQUES
5 LINEAR DISCRIMINANT FUNCTIONS
6 MULTILAYER NEURAL NETWORKS
7 STOCHASTIC METHODS