Automated Explanations as a Component of a Computer-Aided Design System

Abstract
The ability to explain something, e.g., the operation of a complex machine or program, is an important, but poorly understood, component of intelligent behavior. We discuss an artificial intelligence approach to the modeling of the explanation process within the framework of a graphics-based CAD system, currently under development, that can describe its own use, including the common ways to make and recover from errors. With a coordinated textual and pictorial display, the system, CADHELP, simulates an expert demonstrating the operation of the graphical features of the CAD subsystem. It consults a knowledge base of feature scripts to explain a feature, to generate prompts as the feature is being operated, and to give certain types of help when a feature is misused. CADHELP provides these services by summarizing a feature script in different ways depending upon what it has told the user previously. The summarization process is based upon a theory of natural-language generation in which a concept to be expressed is replaced in a short-term memory by words spanning part of its meaning, interspersed with subconcepts still to be expressed. The generation process is mediated by a set of "sketchification" strategies, which prescribe which parts of a knowledge structure, a causal chain, or a single concept need not be expressed, since the listener should be able to infer them.
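The generation loop described above can be sketched in code. This is purely an illustrative reconstruction, not the paper's implementation: the data model, all function and field names (`sketchify`, `generate`, `subconcepts`, the "move-cursor" feature-script fragment), and the example concepts are hypothetical, chosen only to mirror the stated mechanism in which a concept in short-term memory is replaced by words covering part of its meaning plus subconcepts still to be expressed, with sketchification pruning what the listener was already told.

```python
def sketchify(concept, already_told):
    """Sketchification (illustrative): drop subconcepts the listener
    should be able to infer because they were expressed previously."""
    return [c for c in concept.get("subconcepts", [])
            if c["label"] not in already_told]

def generate(concept, already_told=frozenset()):
    """Expand a concept into a word stream via short-term memory
    replacement, as the abstract describes."""
    memory = [concept]            # short-term memory of items to express
    words = []
    told = set(already_told)
    while memory:
        item = memory.pop(0)
        if isinstance(item, str):
            words.append(item)    # a word already spans its meaning
            continue
        told.add(item["label"])
        # Replace the concept by words for part of its meaning,
        # interspersed with subconcepts still to be expressed.
        memory = item["words"] + sketchify(item, told) + memory
    return " ".join(words)

# Hypothetical fragment of a "feature script" for a pointing feature.
move_cursor = {
    "label": "move-cursor",
    "words": ["move", "the", "stylus"],
    "subconcepts": [
        {"label": "grasp-stylus",
         "words": ["after", "grasping", "it"],
         "subconcepts": []},
    ],
}

print(generate(move_cursor))
# A later telling suppresses what the user has already been told:
print(generate(move_cursor, already_told={"grasp-stylus"}))
```

Under these assumptions, the first call verbalizes the full causal detail while the second omits the inferable subconcept, which parallels the abstract's claim that a feature script is summarized differently depending on what the user was told previously.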
