Ian Cloete, Jacek M. Zurada
MIT Press, 2000 - Computers - 486 pages
Looking at ways to encode prior knowledge and to extract, refine, and revise knowledge within a neurocomputing system.

Neurocomputing methods are loosely based on a model of the brain as a network of simple interconnected processing elements corresponding to neurons. These methods derive their power from the collective processing of artificial neurons, their chief advantage being that such systems can learn and adapt to a changing environment. In knowledge-based neurocomputing, the emphasis is on the use and representation of knowledge about an application. Explicit modeling of the knowledge represented by such a system remains a major research topic, because humans find it difficult to interpret the numeric representation of a neural network.

The key assumption of knowledge-based neurocomputing is that knowledge is obtainable from, or can be represented by, a neurocomputing system in a form that humans can understand. That is, the knowledge embedded in the neurocomputing system can also be represented in a symbolic or well-structured form, such as Boolean functions, automata, rules, or other familiar ways. The focus of knowledge-based neurocomputing is on methods to encode prior knowledge and to extract, refine, and revise knowledge within a neurocomputing system.

Contributors: C. Aldrich, J. Cervenka, I. Cloete, R.A. Cozzio, R. Drossu, J. Fletcher, C.L. Giles, F.S. Gouws, M. Hilario, M. Ishikawa, A. Lozowski, Z. Obradovic, C.W. Omlin, M. Riedmiller, P. Romero, G.P.J. Schmitz, J. Sima, A. Sperduti, M. Spott, J. Weisbrod, J.M. Zurada
Page 2 - For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Page 113 - EECE 92-002, Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM, 87131.
Page 91 - We then tested the stability of the fuzzy internal state representation on 100 randomly generated strings of length 100 by comparing, at each time step, the output signal of each recurrent state neuron with its ideal output signal. Since each recurrent state neuron Sᵢ corresponds to a FFA state qᵢ, we know the degree to which
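The stability test quoted above compares each state neuron's output with its ideal signal at every time step. A minimal sketch of that comparison (the function name and data layout are illustrative assumptions, not the chapter's code):

```python
def max_output_error(net_outputs, ideal_outputs):
    """Largest absolute difference between each recurrent state neuron's
    output and its ideal (FFA-derived) signal, taken over all time steps.

    Both arguments are lists of per-time-step output vectors; these names
    are hypothetical stand-ins for the network and automaton signals."""
    return max(abs(net - ideal)
               for step_net, step_ideal in zip(net_outputs, ideal_outputs)
               for net, ideal in zip(step_net, step_ideal))
```

A small maximum error (well below 0.5) would indicate that the internal state representation stayed stable while reading the string.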
Page 2 - Artificial intelligence is the study of the computations that make it possible to perceive, reason and act. (Winston, 1992)
Page 92 - and H = 9.75: The existence of significant neuron output errors for H = 9.7 suggests that the internal FFA representation is unstable. For H > 9.75, the internal FFA state representation becomes stable. This discontinuous change can be explained by observing that there exists a critical value H₀(r) such that the number of stable fixed points also changes discontinuously from one to two for H < H₀(r) and H
Page 98 - a search tree with the initial state as its root and the number of successors of each node equal to the number of symbols in the input alphabet. Links between nodes correspond to transitions between DFA states. The search is performed in breadth-first order.
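The breadth-first search over the transition tree described in this passage can be sketched as follows (a hypothetical illustration assuming the DFA is given as a transition table; the function and variable names are not taken from the chapter):

```python
from collections import deque

def bfs_transition_tree(delta, start, alphabet):
    """Breadth-first traversal of the tree rooted at the initial state.

    Each node has one successor per input symbol, and links between
    nodes correspond to DFA transitions. Returns the distinct states
    in the order they are first reached."""
    order = [start]
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for symbol in alphabet:          # number of successors = |alphabet|
            successor = delta[(state, symbol)]
            if successor not in seen:    # expand each state only once
                seen.add(successor)
                order.append(successor)
                queue.append(successor)
    return order
```

For example, a three-state DFA over {0, 1} whose transitions lead 0 → 1 → 2 is explored in exactly that order.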
Page 85 - DFAs, a set of FFA states can be occupied to varying degrees at any point in time; this fuzzification of states generally reduces the size of the model, and the dynamics of the system being modeled is often more accessible to a direct interpretation.
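The varying degrees of state occupancy mentioned here can be illustrated with a membership vector updated by max-min composition, a standard way to define fuzzy automaton transitions (a toy sketch; the transition-table encoding and function name are assumptions for illustration, not the chapter's formulation):

```python
def fuzzy_step(memberships, fuzzy_delta, symbol):
    """One FFA transition step: each state's new degree of occupancy is
    the max over predecessor states of min(predecessor degree,
    transition degree). A crisp DFA is the special case where every
    degree is 0 or 1, so exactly one state is occupied at a time."""
    n = len(memberships)
    return [max(min(memberships[i], fuzzy_delta.get((i, symbol, j), 0.0))
                for i in range(n))
            for j in range(n)]
```

Starting fully in state 0 and reading a symbol whose transition degrees are 0.3 (to state 0) and 0.7 (to state 1) leaves both states partially occupied, which is exactly the behavior the passage contrasts with DFAs.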
Page 93 - Stability of FFA state encoding: The histogram shows the absolute neuron output error of a network with 100 neurons that implements a randomly generated FFA, and reads 100 randomly generated strings of length 100 for different values of the weight strength H. The