Neural Networks and Learning Machines, Volume 10

Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks and learning machines from an engineering perspective, providing extensive, state-of-the-art coverage that will expose readers to the myriad facets of neural networks and help them appreciate the technology's origin, capabilities, and potential applications.

KEY TOPICS: Examines all the important aspects of this emerging technology, covering the learning process, backpropagation, radial basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Integrates computer experiments throughout to demonstrate how neural networks are designed and how they perform in practice. Chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary all reinforce concepts throughout. New chapters delve into such areas as Support Vector Machines, Reinforcement Learning/Neurodynamic Programming, Rosenblatt's Perceptron, the Least-Mean-Square (LMS) Algorithm, Regularization Theory, Kernel Methods and Radial-Basis Function (RBF) Networks, and Bayesian Filtering for State Estimation of Dynamic Systems. An entire chapter of case studies illustrates the real-life, practical applications of neural networks. A highly detailed bibliography is included for easy reference.
What people are saying
User Review
This is an excellent book with the latest advances fully reflected.
User Review
ok
Common terms and phrases
annealing applied approximation attractor backpropagation backpropagation algorithm Boltzmann machine Chapter computational condition convergence cost function covariance matrix data points defined denote derivative described desired response differential entropy dimensionality discussed distribution dynamic programming dynamic system eigenvalue equation error estimate examples feedback FIGURE formula Gaussian gradient hidden layer hidden neurons Hopfield input space input vector input–output iteration Kalman filter kernel kernel PCA Kullback–Leibler divergence learning algorithm learning-rate parameter least-squares linear LMS algorithm Lyapunov Markov chain method minimize multilayer perceptron mutual information neural network neuron nodes noise nonlinear observation optimal output layer particle filter pattern performance probability density function problem pX(x) random variable random vector RBF network recurrent network regularization respect result Section self-organizing self-organizing map shown signal statistical stochastic supervised learning support vector machine synaptic weights theorem theory training sample update weight vector zero