Principles of Neurocomputing for Science and Engineering

Unlike other neural network books, this one is written specifically for scientists and engineers who want to apply neural networks to the solution of complex problems. For each neurocomputing concept, a solid mathematical foundation is presented, along with illustrative examples of the particular architecture and its associated training algorithm. The book incorporates many detailed examples and an extensive set of end-of-chapter problems.
Contents (13 other sections not shown)

2 ... 21
2.9.1 Scaling / 2.9.2 Transformations / 2.9.3 Fourier Transform ... 84
Mapping Networks ... 96
Copyright