Science of Artificial Neural Networks, Volume 1, Parts 1-2. SPIE, 1992. Neural networks (Computer science).
From inside the book
Results 1-3 of 6
Page 382
... Lyapunov exponents of an unknown dynamical system is designed. The algorithm estimates not only the largest but all n Lyapunov exponents of an n-dimensional system correctly. The estimation is carried out by multilayer feedforward ...
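The snippet describes estimating all n exponents of an n-dimensional system. A minimal sketch of the standard QR accumulation step that such estimators rely on (Eckmann & Ruelle 1985), assuming the map's Jacobian is known in closed form rather than learned by a feedforward network as in the paper, and assuming the usual Hénon parameters a = 1.4, b = 0.3 (not stated in the snippet):

```python
import numpy as np

def henon_lyapunov(a=1.4, b=0.3, n_iter=20000, burn_in=1000):
    """Estimate both Lyapunov exponents of the Henon map via the QR
    method: push an orthonormal frame through the Jacobians along an
    orbit and average the logs of the diagonal of R."""
    x, y = 0.1, 0.1
    for _ in range(burn_in):                 # discard the transient
        x, y = 1.0 - a * x * x + y, b * x
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian of the Henon map
                      [b,            0.0]])
        Q, R = np.linalg.qr(J @ Q)           # re-orthonormalize the frame
        sums += np.log(np.abs(np.diag(R)))   # local stretching rates
        x, y = 1.0 - a * x * x + y, b * x
    return sums / n_iter

lam = henon_lyapunov()
```

A built-in sanity check: since |det J| = b everywhere, the two exponents must sum to ln b, independent of the orbit.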
Page 386
... Lyapunov exponents λ1, λ2 and λ3 are 1.50, 0.0 and -22.5, respectively, and they satisfy the rule that λ1 + λ2 + λ3 ... Lyapunov exponents of the Henon map and the Lorenz attractor. TABLE I: LYAPUNOV EXPONENT ESTIMATES OF THE HENON MAP ...
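The rule truncated in the snippet can be checked directly: the Lorenz Jacobian has constant trace -(σ + 1 + b), so the three exponents must sum to that divergence. The parameter set below is an assumption (the snippet does not state it), chosen because the quoted exponents 1.50, 0.0, -22.5 match published estimates for it:

```python
# Assumed Lorenz parameters (not given in the snippet).
sigma, r, b = 16.0, 45.92, 4.0

# Divergence of the Lorenz flow = trace of its Jacobian,
# which is the constant -(sigma + 1 + b) everywhere in phase space.
trace = -(sigma + 1.0 + b)

lam = (1.50, 0.0, -22.5)        # exponents quoted in the snippet
print(trace, sum(lam))          # -21.0 -21.0
```

The agreement (both equal -21.0) is the consistency rule the text appeals to: for a dissipative flow the exponents sum to the (negative) mean divergence.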
Page 388
... Lyapunov exponents with multilayer feedforward network learning', Department of Economics, University of Houston. [4] Eckmann, J.-P. and D. Ruelle (1985), 'Ergodic theory of chaos and strange attractors', Reviews of Modern Physics ...
Contents
Computational learning theory (Plenary Paper) | 3 |
R. Raghavan, National Univ. of Singapore, Singapore | 19 |
R. Michaels, Univ. of Tennessee, Knoxville | 32 |
Copyright
30 other sections not shown
Common terms and phrases
AANET activation function analysis applied approximation architecture artificial neural networks automata backpropagation behavior binary classification Computer configuration convergence correlation defined distal dynamics elements equations error example faults feature feedforward feedforward networks Figure global grammar hidden layer hidden nodes hidden units IEEE input matrix input vectors iterations learning algorithm learning potential function linear Lyapunov exponents mapping mathematical method minimization multilayer multilayer perceptron neural net neurons noise nonlinear null space number of hidden objective function optimal optimisation oscillators output layer output node parameters pattern recognition performance perturbation phase transition pixels principal component probability distribution problem propagation random receptive field receptors representation represents rule samples self-organizing sigmoid sigmoid functions signals simulated simulated annealing single-layer solution space statistical structure supervised learning target technique theory threshold tooth training patterns training set transformations two-layer perceptron variables vector visual weights