Neural Networks and Learning Machines, Volume 10

For graduate-level neural network courses offered in departments of Computer Engineering, Electrical Engineering, and Computer Science. Neural Networks and Learning Machines, Third Edition is renowned for its thoroughness and readability. This well-organized and completely up-to-date text remains the most comprehensive treatment of neural networks from an engineering perspective, and it is also well suited to professional engineers and research scientists. Matlab codes used for the computer experiments in the text are available for download at: http://www.pearsonhighered.com/haykin/

Refocused, revised, and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together: ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either independently.
Contents

Introduction | 1
Chapter 1 | 47
Chapter 2 | 68
Copyright
(15 other sections not shown)