A Unified Theory of Inference for Text Understanding
Computer Science Division, University of California, 1987 - Natural language processing (Computer science) - 161 pages
Natural languages, such as English, are difficult to understand not only because of the variety of forms that can be expressed, but also because of what is not explicitly expressed. The problem of deciding what a text implies, of 'reading between the lines', is the problem of inference. For a reader to extract the proper set of inferences from a text (the set intended by the text's author) requires a great deal of general knowledge on the reader's part, as well as the capability to reason with that knowledge. When the 'reader' is a computer program, it becomes very difficult to represent this knowledge so that it will be accessible when needed. Past approaches to the problem of inference have often concentrated on a particular type of knowledge structure (such as a script) and postulated an algorithm tuned to process just that type of structure. The problem with this approach is that the algorithm is difficult to modify when it comes time to add a new type of knowledge structure. An alternative, unified approach is proposed. This approach is formalized in a computer program named FAUSTUS. The algorithm recognizes six very general classes of inference, classes that are not dependent on individual knowledge structures; rather, the classes describe general kinds of connections between concepts. New kinds of knowledge can be added without modifying the algorithm. Thus the complexity is shifted from the algorithm to the knowledge base. To accommodate this, a powerful knowledge representation language named KODIAK is employed.
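The architectural idea — one fixed algorithm over a small set of general inference classes, with all structure-specific detail held as data in the knowledge base — can be sketched as follows. This is a minimal illustration of the design principle, not FAUSTUS itself: the class names, the toy "restaurant" entries, and the `InferenceEngine` API are all invented for the example, and the stand-in inference classes are not the six classes the thesis defines.

```python
from collections import defaultdict

class InferenceEngine:
    """One fixed algorithm; structure-specific detail lives in the KB.

    The knowledge base maps a concept to (inference_class, conclusion)
    pairs. The engine understands only a small, fixed set of general
    inference classes; new knowledge structures are added as data,
    never as new code in the engine.
    """
    # Stand-in class names for illustration only.
    GENERAL_CLASSES = {"elaboration", "concretion"}

    def __init__(self, knowledge_base):
        self.kb = knowledge_base

    def infer(self, concepts):
        """Collect every inference the general classes license."""
        results = defaultdict(list)
        for concept in concepts:
            for cls, conclusion in self.kb.get(concept, []):
                if cls in self.GENERAL_CLASSES:
                    results[cls].append(conclusion)
        return dict(results)

# A toy script-like "restaurant" knowledge structure, expressed as data.
# Adding it (or any new structure) requires no change to the engine.
kb = {
    "order-food": [("elaboration", "customer is in a restaurant"),
                   ("concretion", "ordering is a kind of requesting")],
    "pay-bill":   [("elaboration", "customer received service")],
}

engine = InferenceEngine(kb)
print(engine.infer(["order-food", "pay-bill"]))
```

The point of the sketch is the division of labor: extending coverage means editing `kb`, while `infer` stays untouched — the complexity sits in the knowledge base, not the algorithm.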
NETL: A System for Representing and Using Real-world Knowledge
Scott E. Fahlman