Neural Networks and Learning Machines, Volume 10
Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks and learning machines from an engineering perspective, providing extensive, state-of-the-art coverage that will expose readers to the myriad facets of neural networks and help them appreciate the technology's origins, capabilities, and potential applications. KEY TOPICS: Examines all the important aspects of this emerging technology, covering the learning process, backpropagation, radial-basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Integrates computer experiments throughout to demonstrate how neural networks are designed and how they perform in practice. Chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary reinforce concepts throughout. New chapters delve into such areas as support vector machines, reinforcement learning/neurodynamic programming, Rosenblatt's perceptron, the least-mean-square (LMS) algorithm, regularization theory, kernel methods and radial-basis function (RBF) networks, and Bayesian filtering for state estimation of dynamic systems. An entire chapter of case studies illustrates the real-life, practical applications of neural networks. A highly detailed bibliography is included for easy reference.
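Since the blurb singles out the least-mean-square (LMS) algorithm among the book's topics, here is a minimal sketch of the standard LMS weight update in Python, in the spirit of the computer experiments the book integrates. This is an illustrative example under common textbook assumptions, not code from the book itself; the function name lms_train and the parameters eta and epochs are placeholders of our own choosing.

```python
import numpy as np

def lms_train(X, d, eta=0.01, epochs=50):
    """Fit a linear neuron with the least-mean-square (LMS) rule.

    X: (n_samples, n_features) array of input vectors
    d: (n_samples,) array of desired responses
    eta: learning-rate parameter
    """
    w = np.zeros(X.shape[1])          # synaptic weight vector
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x        # instantaneous error e(n) = d(n) - w^T x(n)
            w += eta * e * x          # LMS update: w(n+1) = w(n) + eta * e(n) * x(n)
    return w

# Usage: recover the weights of a noisy linear system y = 2*x1 - 3*x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
d = X @ np.array([2.0, -3.0]) + 0.01 * rng.normal(size=200)
print(lms_train(X, d))                # approximately [ 2. -3.]
```

The stochastic, sample-by-sample update is what distinguishes LMS from batch least-squares: each input–output pair nudges the weight vector along the instantaneous gradient, so the learning-rate parameter eta governs both convergence speed and misadjustment.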
Common terms and phrases

annealing applied approximation attractor back-propagation back-propagation algorithm Boltzmann machine Chapter computational condition convergence cost function covariance matrix data points defined denote derivative described desired response differential entropy dimensionality discussed distribution dynamic programming dynamic system eigenvalue equation error estimate examples feedback FIGURE formula Gaussian gradient hidden layer hidden neurons Hopfield input space input vector input–output iteration Kalman filter kernel kernel PCA Kullback–Leibler divergence learning algorithm learning-rate parameter least-squares linear LMS algorithm Lyapunov Markov chain method minimize multilayer perceptron mutual information neural network neuron nodes noise nonlinear observation optimal output layer particle filter pattern performance probability density function problem pX(x) random variable random vector RBF network recurrent network regularization respect result Section self-organizing self-organizing map shown signal statistical stochastic supervised learning support vector machine synaptic weights theorem theory training sample update weight vector zero