Feedforward Neural Network Methodology

The decade prior to publication has seen explosive growth in computational speed and memory and a rapid enrichment in our understanding of artificial neural networks. These two factors have cooperated to at last provide systems engineers and statisticians with a working, practical, and successful ability to routinely build accurate, complex, nonlinear models of such ill-understood phenomena as physical, economic, social, and information-based time series and signals, and of the patterns hidden in high-dimensional data. The models are based closely on the data itself and require little prior understanding of the stochastic mechanisms underlying these phenomena. Among these models, feedforward neural networks, also called multilayer perceptrons, have lent themselves to the design of the widest range of successful forecasters, pattern classifiers, controllers, and sensors. In a number of problems in optical character recognition and medical diagnostics, such systems provide state-of-the-art performance, and similar performance is also expected in speech recognition applications. The successful application of feedforward neural networks to time series forecasting has been demonstrated many times, and quite visibly so in the formation of market funds whose investment decisions are based largely on neural network-based forecasts of performance. The purpose of this monograph, accomplished by exposing the methodology driving these developments, is to enable you to engage in these applications and, by being brought to several research frontiers, to advance the methodology itself.
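The blurb above describes single-hidden-layer feedforward networks (multilayer perceptrons) with sigmoidal nodes, trained by steepest descent and backpropagation. As a minimal sketch of that idea, not the monograph's own code, here is a tiny network fitted to the XOR pattern, a classic nonlinear task that a single perceptron cannot represent; the layer sizes, learning rate, and task are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: a single-hidden-layer feedforward network with
# sigmoidal hidden nodes and a linear output node, trained by steepest
# descent on squared error (backpropagation). All sizes are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, w2, b2):
    """Forward pass: input -> sigmoidal hidden layer -> linear output."""
    h = sigmoid(W1 @ x + b1)
    return w2 @ h + b2, h

def train_step(x, y, W1, b1, w2, b2, lr):
    """One steepest-descent update on the loss 0.5 * (y_hat - y)**2."""
    y_hat, h = forward(x, W1, b1, w2, b2)
    err = y_hat - y
    delta = err * w2 * h * (1.0 - h)          # backpropagated hidden error
    return (W1 - lr * np.outer(delta, x),      # hidden weights
            b1 - lr * delta,                   # hidden biases
            w2 - lr * err * h,                 # output weights
            b2 - lr * err)                     # output bias

# XOR training set: not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

def mse(W1, b1, w2, b2):
    preds = np.array([forward(x, W1, b1, w2, b2)[0] for x in X])
    return float(np.mean((preds - Y) ** 2))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 2))        # 4 hidden nodes, 2 inputs
b1 = np.zeros(4)
w2 = rng.normal(scale=0.5, size=4)
b2 = 0.0

before = mse(W1, b1, w2, b2)
for _ in range(5000):                          # repeated sweeps of steepest descent
    for x, y in zip(X, Y):
        W1, b1, w2, b2 = train_step(x, y, W1, b1, w2, b2, lr=0.5)
after = mse(W1, b1, w2, b2)
```

The hidden layer lets the network carve the input space with multiple hyperplanes, which is exactly what the single-node perceptron discussed in the book's early chapters cannot do.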
Contents
Real-Valued Nodes
Architecture Selection and Penalty Terms
Perceptrons: Networks with a Single Node
Network Elements
The Issue
A Note on Use as a Text