Science of Artificial Neural Networks, Volume 1, Parts 1-2. SPIE, 1992. Neural networks (Computer science).
From inside the book
Page 193
... parameters are typical MOSIS parameters:
N-channel parameters: VTO = 0.813889, KP = 4.544E-05, GAMMA = 0.6306, PHI = 0.6
P-channel parameters: VTO = -0.871565, KP = 1.7177E-05, GAMMA = ...
Step #2: ITN(max) = -IIN(min) = 100 uA; Vss = ...
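The values quoted above are standard SPICE level-1 MOSFET model parameters (threshold voltage VTO, transconductance KP, body-effect coefficient GAMMA, surface potential PHI). As a minimal sketch of how such parameters enter a first-order device model (this is not the paper's own circuit analysis; the W/L ratio and bias voltages below are assumed purely for illustration):

import math

def threshold_voltage(vto, gamma, phi, vsb=0.0):
    # SPICE level-1 threshold with body effect:
    # Vth = VTO + GAMMA * (sqrt(PHI + Vsb) - sqrt(PHI))
    return vto + gamma * (math.sqrt(phi + vsb) - math.sqrt(phi))

def drain_current_sat(kp, w_over_l, vgs, vth):
    # Square-law saturation current: Id = (KP / 2) * (W / L) * (Vgs - Vth)^2
    vov = vgs - vth
    return 0.5 * kp * w_over_l * vov * vov if vov > 0.0 else 0.0

# N-channel values quoted in the snippet; W/L = 10 and Vgs = 2 V are assumptions.
vth_n = threshold_voltage(vto=0.813889, gamma=0.6306, phi=0.6)
print(f"Id(sat) = {drain_current_sat(4.544e-05, 10.0, 2.0, vth_n) * 1e6:.1f} uA")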
Page 300
... parameters of neurons within each layer, and (3) training the network by updating neuron parameters to optimize performance on representative training data. In contrast to the third step, for which many strategies have been ...
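The snippet outlines a three-step design procedure: fix the architecture, choose the parameters of the neurons within each layer, and train by updating those parameters against representative data. A minimal sketch of step (3) as plain backpropagation on an assumed toy task (XOR; the layer sizes, learning rate, and iteration count are illustrative, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)

# Assumed toy task: XOR with a 2-4-1 sigmoid network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # squared-error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated to the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)  # step (3): update parameters
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]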
Page 407
... parameters in the network, it was not investigated how this weight sharing technique affects the capacity of the network. This is an important question, since the key parameter in the framework of Vapnik is the capacity, and not the ...
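Weight sharing ties many connections to one underlying parameter, as in convolutional layers. A minimal sketch of the parameter-count effect the snippet refers to (the toy sizes are assumed; note that, as the snippet argues, the reduced count does not by itself determine the capacity in Vapnik's sense):

import numpy as np

def conv1d_shared(x, kernel, bias):
    # Every output unit reuses the same kernel weights, so the layer has
    # len(kernel) + 1 free parameters regardless of the input length.
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)]) + bias

x = np.arange(16, dtype=float)        # assumed toy input
kernel = np.array([0.25, 0.5, 0.25])  # 3 shared weights
print(conv1d_shared(x, kernel, bias=0.0))

# An unshared layer with the same connectivity would give each of the 14
# outputs its own 3 weights and bias: 14 * 4 = 56 parameters versus 4 here.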
Contents
Computational learning theory (Plenary Paper) [1710-03]   3
Architectures   19
R. Michaels, Univ. of Tennessee-Knoxville   32
Copyright
28 other sections not shown
Common terms and phrases
AANET activation function analysis applied approximation architecture artificial neural networks automata backpropagation behavior binary classification Computer configuration convergence correlation defined distal dynamics elements equations error example faults feature feedforward networks Figure global grammar hidden layer hidden nodes hidden units IEEE input matrix input vectors iterations learning algorithm learning potential function linear Lyapunov exponents mapping mathematical method minimization multilayer multilayer perceptron neural net neurons noise nonlinear null space number of hidden objective function optimal optimisation oscillators output layer output node parameters pattern recognition performance perturbation phase transition pixels principal component probability distribution problem propagation receptive field receptors representation represents rule samples self-organizing sigmoid sigmoid functions signals simulated simulated annealing single-layer solution space statistical structure supervised learning target technique theory threshold tooth training patterns training set transformations two-layer perceptron variables vector visual weights