User Review
A lot has happened since 1995, and many new techniques and important developments have emerged in the field of A.I. and, more specifically, machine learning. Still, this book has aged very well, for two reasons. First, it covers the fundamental techniques and concepts that every practitioner must understand and be able to use, for example non-parametric techniques for density estimation (kNN), dimensionality reduction (PCA), and mixture models, in addition to, of course, neural networks. Second, its last chapter on Bayesian techniques paves the way for moving on to modern techniques like deep energy models and deep belief networks.
The explanations are clear and easy to read. Properties of, and advances based on, neural networks are presented in a principled way in the context of statistical pattern recognition. The exercises are wisely chosen to ensure an understanding of the presented results and of the conditions under which they were derived.
But this book goes beyond theory. A chapter is devoted to optimization techniques, i.e. the algorithms used to train neural networks in practice. After reading that chapter and going through the exercises you will have a good understanding of conjugate gradients and L-BFGS.
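To give a flavor of what that chapter covers, here is a minimal sketch (my own illustration, not code from the book) of the conjugate gradient method applied to a small quadratic error function, where the exact line search and the Fletcher-Reeves-style direction update can be written in a few lines:

```python
# Minimal conjugate gradient sketch for minimizing the quadratic
# f(w) = 0.5 * w^T A w - b^T w, whose gradient is A w - b.
# The matrix A and vector b below are illustrative toy values.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # residual = -gradient
    d = list(r)                                        # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        if rs_old < tol:
            break
        Ad = matvec(A, d)
        alpha = rs_old / dot(d, Ad)                    # exact line search along d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rs_new = dot(r, r)
        beta = rs_new / rs_old                         # direction update coefficient
        d = [ri + beta * di for ri, di in zip(r, d)]   # new conjugate direction
        rs_old = rs_new
    return x

A = [[3.0, 0.0], [0.0, 1.0]]
b = [1.0, 1.0]
w = conjugate_gradient(A, b)  # converges to [1/3, 1] in at most 2 iterations
```

For a general (non-quadratic) network error function, the exact line search above would be replaced by an iterative one, and quantities like the Hessian product are never formed explicitly; the book develops those practical details.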
The chapter on how to improve generalization, either by optimizing the structure of the network or by combining multiple classifiers, is kept at an intuitive level, yet the concepts are well motivated and the few mathematical details help the reader achieve a solid grasp of why those ideas work. As in the rest of the chapters, the book explains how to carry this out in practice, i.e. how I can check whether my classifier has actually become better. By the end of the chapter the reader is familiar with the concepts of regularization (weight decay), cross-validation, and bagging.
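The combination of weight decay and cross-validation can be illustrated with a toy sketch (again my own, not from the book): fit a one-parameter ridge-regularized linear model and pick the decay coefficient by k-fold cross-validation. All data and the candidate coefficients are made up for illustration.

```python
# Toy sketch: choose a weight-decay (ridge) coefficient by k-fold
# cross-validation for a 1-D linear model y ~ w * x.
import random

def fit_ridge_1d(xs, ys, lam):
    # Closed-form ridge solution for one weight: w = sum(x*y) / (sum(x^2) + lam)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def cv_error(xs, ys, lam, k=5):
    # Average squared validation error over k contiguous folds.
    n = len(xs)
    fold = n // k
    err = 0.0
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        tr_x, tr_y = xs[:lo] + xs[hi:], ys[:lo] + ys[hi:]
        va_x, va_y = xs[lo:hi], ys[lo:hi]
        w = fit_ridge_1d(tr_x, tr_y, lam)
        err += sum((w * x - y) ** 2 for x, y in zip(va_x, va_y))
    return err / n

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]  # true slope is 2

candidates = [0.0, 0.01, 0.1, 1.0, 10.0]
best_lam = min(candidates, key=lambda lam: cv_error(xs, ys, lam))
w = fit_ridge_1d(xs, ys, best_lam)  # slope estimate close to 2
```

With almost noise-free data the cross-validation picks a small decay coefficient; with noisier or scarcer data a larger one would win, which is exactly the bias-variance trade-off the chapter makes intuitive.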
User Review
This book came out at about the same time as Ripley's, which has almost the same title, but in reverse. At the time, I liked Ripley's better, because it covered more things that were totally new to me. Then a friend said he had chosen Bishop for a course he was teaching, and I went back and reconsidered the two books. I soon found that my friend was right: Bishop's book is better laid out for a course in that it starts at the beginning (well, not quite the beginning--you do need to be fairly sophisticated mathematically) and works up, while Ripley's is more a collection of insights all at the same level, which is confusing to learn from. Bishop is able to cover both theoretical and practical aspects well. There certainly are topics that aren't covered, but the ones that are there fit together nicely, are accurate and up to date, and are easy to understand. It has migrated from my bookcase to my desk, where it now stays, and I reach for it often. To the reviewer who said "I was looking forward to a detailed insight into neural networks in this book. Instead, almost every page is plastered up with sigma notation", that's like saying about a book on music theory "Instead, almost every page is plastered with black-and-white ovals (some with sticks on the edge)." Or to the reviewer who complains this book is limited to the mathematical side of neural nets, that's like complaining about a cookbook on beef being limited to the carnivore side. If you want a non-technical overview, you can get that elsewhere, but if you want understanding of the techniques, you have to understand the math. Otherwise, there's no beef.