IEE International Conference on Artificial Neural Networks. The Institution, 1989. Neural networks (Computer science).
Page 314
... convergence on the generalisation task for several w. Net topology and the 16 training patterns are shown; sample ...
[Figure: # cycles to convergence (000s), ticks 7-30; curves a-e for α = 2.5, 4, 5, 10 and 25.]
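The snippet breaks off before the method is stated, so the following is only a minimal sketch of this kind of experiment. It assumes α is the gain of a logistic activation and that the 16 training patterns are the 4-bit parity set (a common benchmark of the period, but purely a guess here); the function cycles_to_convergence and all hyperparameters are hypothetical.

    import numpy as np

    def sigmoid(x, alpha):
        # Logistic activation with gain alpha (assumption: alpha is the gain).
        return 1.0 / (1.0 + np.exp(-alpha * x))

    def cycles_to_convergence(alpha, hidden=4, lr=0.5, tol=0.1,
                              max_cycles=30000, seed=0):
        rng = np.random.default_rng(seed)
        # 16 training patterns: all 4-bit inputs, parity target (hypothetical task).
        X = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)],
                     dtype=float)
        y = X.sum(axis=1) % 2
        W1 = rng.normal(0.0, 0.5, (4, hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(0.0, 0.5, hidden)
        b2 = 0.0
        for cycle in range(1, max_cycles + 1):
            h = sigmoid(X @ W1 + b1, alpha)          # hidden layer
            o = sigmoid(h @ W2 + b2, alpha)          # output unit
            err = o - y
            if np.max(np.abs(err)) < tol:            # converged on all patterns
                return cycle
            # Back-propagation; derivative of the gained sigmoid is alpha*s*(1-s).
            do = err * alpha * o * (1 - o)
            dh = np.outer(do, W2) * alpha * h * (1 - h)
            W2 -= lr * (h.T @ do)
            b2 -= lr * do.sum()
            W1 -= lr * (X.T @ dh)
            b1 -= lr * dh.sum(axis=0)
        return None                                  # did not converge

    for a in (2.5, 4, 5, 10, 25):
        print(f"alpha = {a}: {cycles_to_convergence(a)} cycles")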
Page 330
... convergence increases logarithmically with the size of the search space [M] (3). Performance Degradation: the performance of Stochastic Search Networks is affected by two main factors: the Time Ratio (Tr). This determines the ...
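The excerpt names a convergence result and a Time Ratio (Tr) but cuts off before defining either, so nothing below comes from the paper. It is only a generic harness showing how mean cycles to convergence might be measured against the search-space size M; the stochastic search itself is a deliberately simple stand-in (random proposals, keep improving moves), not the paper's Stochastic Search Network.

    import random
    import statistics

    def cycles_to_converge(M, rng):
        # Toy stochastic search over states 0..M-1 with a single optimum.
        # Stand-in only: propose a random state each cycle and keep it if
        # it is closer to the optimum than the current state.
        target = rng.randrange(M)
        state = rng.randrange(M)
        cycles = 0
        while state != target:
            cycles += 1
            candidate = rng.randrange(M)                       # random proposal
            if abs(candidate - target) < abs(state - target):  # improving move
                state = candidate
        return cycles

    # Print mean cycles for several M so the scaling trend can be inspected.
    for M in (256, 1024, 4096, 16384):
        rng = random.Random(1)
        runs = [cycles_to_converge(M, rng) for _ in range(50)]
        print(f"M = {M:5d}: mean cycles = {statistics.mean(runs):.0f}")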
Page 392
... convergence rate for simulated annealing. The state generation probability distribution function used in ... converge to a solution as good as gradient descent. Diffusion is faster at converging than gradient descent because it is ...
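The excerpt compares diffusion against gradient descent but breaks off mid-sentence. As an illustration only, the sketch below takes "diffusion" to mean Langevin-type noisy gradient descent (an assumption, not the paper's definition) and shows why added noise can escape a poor local minimum that traps plain gradient descent. The test function, temperature, and all parameter values are hypothetical, and the stochastic run can land in either basin depending on the seed.

    import numpy as np

    def f(x):
        # Tilted double well (hypothetical test function): the local
        # minimum near x = -1 is worse than the minimum near x = +1.
        return (x**2 - 1)**2 - 0.3 * x

    def grad(x):
        return 4 * x * (x**2 - 1) - 0.3

    def gradient_descent(x0, lr=0.01, steps=3000):
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    def diffusion(x0, lr=0.01, steps=3000, temp=0.4, seed=0):
        # Langevin-type diffusion: gradient step plus temperature-scaled
        # noise (an assumed reading of "diffusion" in the snippet).
        rng = np.random.default_rng(seed)
        x = x0
        for _ in range(steps):
            x = x - lr * grad(x) + np.sqrt(2 * lr * temp) * rng.normal()
        for _ in range(500):       # settle into the nearest basin, noise off
            x -= lr * grad(x)
        return x

    x0 = -1.2   # start in the basin of the worse minimum
    xg = gradient_descent(x0)
    xd = diffusion(x0)
    print(f"gradient descent: x = {xg:.3f}, f = {f(xg):.3f}")  # trapped near -1
    print(f"diffusion:        x = {xd:.3f}, f = {f(xd):.3f}")  # often crosses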
Common terms and phrases
activation, adaptive, analog, analysis, applications, approach, architecture, artificial neural network, back-propagation, binary, bits, capacitor, cell, circuit, classification, clusters, coefficients, computation, connectionist, connections, convergence, corresponding, defined, described, distribution, elements, equation, error, example, feature, feed-forward, filter, generalisation, given, hardware, hidden layer, hidden units, Hopfield, IEEE, implementation, input pattern, input vector, iterations, Kohonen, learning algorithm, linear, mapping, matrix, memory, method, multi-layer perceptron, n-tuple, neural net, nodes, noise, nonlinear, number of hidden, obtained, operation, optical flow, optimal, optimisation, output neurons, output unit, parallel, Parallel Distributed Processing, parameters, pattern recognition, performance, problem, Proc, processing, processor, propagation, pulse, quantisation, Radial Basis Function, represent, response, rule, Rumelhart, samples, shown in Figure, signal, simulation, SLLUP, solution, space, speech, speech recognition, stochastic, structure, supervised learning, synaptic, target, techniques, threshold, training data, training set, Transputer, update, variable, VLSI, voltage, weights