Principles of Neurocomputing for Science and Engineering
This exciting new text covers artificial neural networks, but more specifically, neurocomputing. Neurocomputing is concerned with processing information, which involves a learning process within an artificial neural network architecture. This neural architecture responds to inputs according to a defined learning rule, and the trained network can then be used to perform certain tasks depending on the application. Neurocomputing can play an important role in solving certain problems, such as pattern recognition, optimization, event classification, control and identification of nonlinear systems, and statistical analysis.

"Principles of Neurocomputing for Science and Engineering," unlike other neural networks texts, is written specifically for scientists and engineers who want to apply neural networks to solve complex problems. For each neurocomputing concept, a solid mathematical foundation is presented, along with illustrative examples to accompany that particular architecture and associated training algorithm.

The book is primarily intended for graduate-level neural networks courses, but in some instances may be used at the undergraduate level. The book includes many detailed examples and an extensive set of end-of-chapter problems.
Introduction to Neurocomputing
Fundamental Neurocomputing Concepts
Scaling, Transformations, Fourier Transform