Pattern Classification

The first edition, published in 1973, has become a classic reference in the field. Now with the second edition, readers will find information on key new topics such as neural networks and statistical pattern recognition, the theory of machine learning, and the theory of invariances. Also included are worked examples, comparisons between different methods, extensive graphics, expanded exercises, and computer project topics. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
What people are saying
User Review: excellent
Review: Pattern Classification
User Review (Kid, Goodreads): Great introduction to pattern recognition/classification, but it is sometimes poorly organized.
Contents
A  1 
3 MAXIMUM-LIKELIHOOD AND BAYESIAN  84
4 NONPARAMETRIC TECHNIQUES  161 
Copyright  
15 other sections not shown
Common terms and phrases
annealing applied approach arbitrary assume backpropagation Bayes Bayesian bias binary calculate Chapter clusters component classifiers Computer exercise configuration Consider convergence corresponding covariance matrix criterion function data set decision boundary decision rule denote derivation dimensional dimensions discriminant function distance distribution entropy equation error rate example feature space FIGURE Gaussian given gradient descent grammar Hessian matrix Hidden Markov Models hidden units hyperplane impurity independent iteration labeled large number learning rate linear discriminant linearly separable maximum-likelihood estimate mean methods minimize minimum mixture density nearest-neighbor neural networks node nonlinear normal number of samples obtain optimal output units P(ω parameters particular pattern recognition Perceptron posterior posterior probabilities prior probabilities problem procedure randomly Section sequence Show shown simple solution split statistical statistically independent stochastic Suppose Theorem training data training error training patterns training samples training set tree two-category unsupervised learning variance weight vector zero