Advanced Algorithms for Neural Networks: A C++ Sourcebook, Volume 1
A valuable working resource for anyone who uses neural networks to solve real-world problems. This practical guide contains a wide variety of state-of-the-art algorithms that are useful in the design and implementation of neural networks. All algorithms are presented on both an intuitive and a theoretical level, with complete source code provided on an accompanying disk.

Several training algorithms for multiple-layer feedforward networks (MLFN) are featured. The probabilistic neural network (PNN) is extended to allow a separate sigma for each variable, and even a separate sigma vector for each class. The generalized regression neural network (GRNN) is similarly extended, and a fast second-order training algorithm for all of these models is provided. The book also discusses the recently developed Gram-Charlier neural network and provides important information on its strengths and weaknesses. Readers are shown several proven methods for reducing the dimensionality of the input data.

Advanced Algorithms for Neural Networks also covers:
- Advanced multiple-sigma PNN and GRNN training, including conjugate-gradient optimization based on cross validation
- The Levenberg-Marquardt training algorithm for multiple-layer feedforward networks
- Advanced stochastic optimization, including Cauchy simulated annealing and stochastic smoothing
- Data reduction and orthogonalization via principal components and discriminant functions
- Economical yet powerful validation techniques, including the jackknife, the bootstrap, and cross validation

Includes a complete state-of-the-art PNN/GRNN program, with both source and executable code.
Hybrid Training Algorithms