
TP5 - 2:30

DOMAINS OF ARTIFICIAL INTELLIGENCE RELEVANT TO SYSTEMS

J. Douglas Birdwell
Department of Electrical Engineering
University of Tennessee
Knoxville, TN 37996-2100

J. Robin B. Cockett
Department of Computer Science
University of Tennessee
Knoxville, TN 37996-1301

John R. Gabriel
MCSD-221
Argonne National Laboratory
9700 South Cass Ave.
Argonne, IL 60439

ABSTRACT
This paper summarizes the authors' views of the areas where techniques from artificial intelligence may prove applicable to systems problems. For this paper, we define "artificial intelligence" as the application of symbolic reasoning on stored knowledge. The concept of knowledge is left imprecise; knowledge may be approximate and symbolic rather than exact and numeric. The components of the field of artificial intelligence which we consider most relevant to the systems community are expert systems, knowledge representation, knowledge base query and inference, data base design and access, and mixed-language programming. We will discuss candidate areas for research in the application of artificial intelligence from two perspectives: current problems in artificial intelligence, the solution of which would positively affect system applications, and areas within systems which we feel would benefit from the application of artificial intelligence techniques.

INTRODUCTION
Two diametrically opposed approaches to the subject matter of artificial intelligence (AI) exist. So different are these approaches that at the outset we feel it important to state where our sentiments lie. The first approach is primarily concerned with modeling human behavior (with all its quirks). A tendency of this school is to study unexpected or pathological software behavior in relation to human behavior in a similar context. This, we believe, is a justifiable approach in the cognitive sciences; however, for software engineers concerned with producing technological tools, this approach has significant shortcomings. The second approach is to view AI as the application of symbolic reasoning. This view follows the work on theorem proving, automated deduction and other related areas, and is the view that is espoused in this paper.

A Definition of Artificial Intelligence
Clearly this approach has solid mathematical foundations in logic and discrete mathematics. Arguments exist that this is therefore not AI, but rather (applied) automated deduction, theorem proving or algorithm design. We believe that it is justifiable to call such activities AI. We propose two interconnected reasons: there are problem areas which are hard to classify in any other way, and the terminology is historical. The usage of the term "Artificial Intelligence" today is similar to the usage of the term "Natural Philosophy" in the 18th century. That term covered a multitude of sins. Until disciplines such as chemistry, physics and mathematics, which we now recognize as distinct, emerged, they were not distinguished. In the same way we begin to see the emergence of distinct disciplines from AI. The question is whether they have emerged sufficiently to warrant, at this date, a distinct nomenclature, allowing the term AI, itself, to become redundant. Our contention is that this has not yet happened, as separate methodologies have yet to clearly emerge.

EXPERT SYSTEMS
To illustrate problem areas which are hard to classify, consider the problems involved in building expert systems. Within the systems community there has been an increasing effort to exploit this so-called "expert system" technology. One way of viewing the aim of an expert system is to provide a model of the human expert (aided by his chosen tools) in his domain of knowledge. This may appear to be the cognitive modeling problem alluded to above. However, there is a significant difference in that, almost by definition, an expert does not behave pathologically. Rather, we expect rational and explainable behavior throughout. This, we believe, places the problem closer to the area of symbolic modeling than to cognitive modeling. It has often been said, in the expert systems literature, that these systems solve problems "for which there are no explicit algorithms." Yet it is proclaimed that there must be an expert present who can perform the task. Clearly, the skeptic might argue, the expert has an, albeit not explicit, algorithm, so why not simply employ a software engineer to extract the algorithm and produce a program?

The Role of Explanations
The answer is that the problem is not so simple. Consider the problem of controlling a nuclear power station: an algorithm may correctly conclude that in order to avoid melt-down some specific action must be taken. The action is performed, however, by a human operator. The action he is requested to perform may be at odds with the procedures he understands, and may seem, on the surface, to be unreasonable. What does he do? If the machine can explain why it recommends the action, then the action it recommends is more likely to be understood and taken. This is not the only class of problems in which explanations play an important role. Consider the development of a model of a plant. The objective is to produce a model which approximates the behavior of the plant. Suppose that after months of


intricate modeling it is observed that in some areas the model behaves significantly differently from the actual plant. Would it not be useful if the model could explain what components or parameters were critical to that behavior? The problem with theoretical models of all descriptions is that they can be wrong. To ease the pain of modifying models to conform to reality, a further level of understanding is required which speaks to the causes of behavior (8). Explanations of behavior are necessary. Indeed, one may view the development of a model as an interaction between reality and the model, in which the explanations act as a guide toward modification. The extraction of explanations from complex systems is non-trivial. Expert systems and AI have the potential to achieve this level of modeling. This is undoubtedly one of the reasons for the popularity of this technology.
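The way a chain of reasoning can double as an explanation may be illustrated with a small backward-chaining sketch. The rules, facts, and the relief-valve scenario below are invented for illustration and are not taken from any particular expert system.

```python
# A minimal backward chainer: each rule carries a human-readable
# justification, and the trace of justifications IS the explanation.

RULES = {
    # conclusion: (premises, justification for using this rule)
    "open_relief_valve": (["pressure_high", "temp_rising"],
                          "relief valve opens when pressure is high and temperature is rising"),
    "pressure_high":     (["reading_above_limit"],
                          "pressure is high when the gauge reading exceeds its limit"),
}

FACTS = {"reading_above_limit", "temp_rising"}

def prove(goal, trace):
    """Backward chain from `goal`; record the justification of each rule used."""
    if goal in FACTS:
        return True
    if goal not in RULES:
        return False
    premises, why = RULES[goal]
    if all(prove(p, trace) for p in premises):
        trace.append(why)
        return True
    return False

trace = []
if prove("open_relief_valve", trace):
    print("Recommended action: open_relief_valve")
    for step in trace:
        print("  because", step)
```

The point of the sketch is that the explanation costs nothing extra: it falls out of the same search that produced the recommendation, which is one reason backward-chaining systems explain themselves so readily.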

MODELING
Hiding inside any expert system is a decision model of the application of expertise; this may be rule-based, frame-based, decision expression-based, or whatever. This model is used by an inference engine to drive the interaction between the system and the user. This can be viewed as a method of determining the next state given the current state and the user's input. The decision model is a dynamic model in the sense normally used in the systems community; however, there is little similarity between it and traditional system theoretic modeling approaches. Decision models specify dynamic transitions conditioned by context. An intriguing property of these models is the fact that they are not efficiently representable using the standard tools of systems theory.

It may be possible to unify techniques from both areas; if realized, this would benefit both communities. The well-understood models from systems theory could lead to richer and more formal techniques in AI, particularly in the area of stochastic modeling. Conversely, construction techniques for discrete state systems could benefit from the importation of techniques from AI. Our premise is that both fields can benefit from an examination of the other's methods. Systems theoretic models can precisely represent stochastic behavior on homogeneous algebraic spaces (for example, on groups, semigroups, and finite dimensional vector spaces over real or complex fields). AI models provide techniques for the representation of non-homogeneous systems, which require the patching together of many conditionally defined functions (for example, from rules of the form "in this situation the system behaves like this").

These non-homogeneous systems introduce two problems. First, they are often difficult to describe succinctly. If the representation is not transparent, they can produce unexpectedly intractable model verification problems. Second, the efficiency of computing the "next state" must be considered. It is possible that some information contained in this state can be calculated independently from much of the information contained in the current state. Exploiting this in more standard systems theoretic models is difficult.

Representation and Explanation
The importance of representation cannot be overstated: it provides the human window into the model. It also determines the efficiency with which the model can be manipulated within the computer. The representation can make an otherwise incomprehensible model understandable (or at least explainable). It should be obvious that the more understandable a model is, the more likely it is to be correct. Furthermore, when the system fails, this human window assumes a new importance in the tracing of cause and effect. Several representation techniques from AI appear to be applicable. A logical representation using minimal disjuncts (prime rules) is one candidate. A second approach is the use of the concept of inheritance, as in hierarchical frame structures (13) and object-oriented programming (4,20), to provide succinct descriptions. The comprehensibility of the representation is increased by graphical presentations of these structures. The ultimate goal is that representations of systems should contain sufficient information for the automatic generation of explanations of their behavior. Backward chaining rule-based systems are well-known for this ability. They are also rather inefficient to evaluate. The ability to have efficient code without sacrificing the explanation capabilities is highly desirable (5).

System and AI Models
The fundamental difference between models, as defined in systems theory, and as defined in AI, is the emphasis placed on global properties versus implementation properties. A systems model is defined by a state transition map and an output map. The state transition map defines the dynamic behavior, and the output map defines the (memoryless) observable behavior. These maps are assumed to be well-defined functions on the state and input spaces. AI models can be placed in this setting; however, the emphasis is on how the state transition map can be evaluated. For example, how can the computer efficiently compute the set of states and inputs which can transit to a given state at a future time? Or, what are the possible future states for a given current state? When the state transition map is an analytic function of the state and input variables, these questions are easily addressed; however, if the state transition map is explicitly enumerated (assuming this is possible), one quickly exceeds available computer resources. AI models normally operate over a discrete state space and a discrete input space. Members of these spaces are structures composed of symbols. The user and the computer have agreed a priori to attach meaning to these symbols. The state transition map represents the relations between elements of the knowledge base, or state space, and the acquisition of new information (inputs). This map can be viewed as a graph. The graph has structure; however, it is local. The objective of techniques used to construct AI models is to exploit this local structure, and to avoid the enumeration of all edges in the graph. We present the following definitions in an effort to formalize these concepts for the purpose of discussion, and to illustrate the connections between systems theoretic models and AI models.

Definition 1 A non-homogeneous space is a set of values, V, and a set of pairs, (Fi, Vi), where Fi is an operator from Vi to V, and Vi is a subset of a finite product V x ... x V. Typically, Vi is a proper subset. An algebra is a non-homogeneous space where each Vi is the whole product V x ... x V.


Definition 2 A non-homogeneous system is a state space V, an input space U, and a set of pairs {(fi, hi)}, i in I, where I is a finite index set. Defining the sets

Si = { (v,u) in V x U : hi(v,u) = 1 },

then the following conditions must be satisfied:

* hi: V x U -> {0, 1},
* fi: Si -> V, and
* the sets Si are disjoint, and define a cover of V x U.

In the terminology of AI, the functions fi are production rules, and the functions hi define the context or premise under which the production rule may be applied. When it is applicable, the function fi defines the state transition map. The non-homogeneous system defines a non-homogeneous space {V x U, {(Si, fi)}}.

Given these definitions, several observations are apparent. First, the functions hi define a partition on the space V x U. Given the third condition, the state transition map is uniquely defined. However, this condition is often removed in AI applications. If the sets Si are disjoint, then by defining the state transition map to be the identity on the set of elements of V x U for which no production rule is applicable, the definition of a non-homogeneous system fits within the context of discrete state systems. Further, nothing within the definition restricts the state or input space to be of finite cardinality; thus, it may not be possible to enumerate all instances of the state transition map. If the sets Si are neither disjoint nor form a cover of V x U, then the non-homogeneous system can be used to describe forward- or backward-chaining inference engines. Elements of V x U outside the union of the sets Si correspond to stopping conditions for a chain of reasoning. For given i and j, where i is not equal to j, elements of the intersection of Si and Sj correspond to elements of the knowledge base from which multiple reasoning chains may exist.

The decomposition provided by the above definitions may allow the individual functions fi to be evaluated in a reasonably compact manner. A transition has two stages: first, the set of applicable functions fi is determined; second, one function from that set is applied. The first stage determines the current behavior mode of the system. These stages lead to a basis for the explanation of model behavior; one explanation is a description of why the context was satisfied for the last used production rule. A different approach to explanations is to encode the context decision in an efficient decision expression, and to extract the explanation from this code when it is needed (5,21). We emphasize that this approach is a preliminary attempt to provide a degree of mathematical rigor to production rule systems (9) in AI; however, we believe the approach is promising, since expert systems of reasonable complexity exist which use production rules applicable in a specified context.
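Definition 2 can be rendered directly in code: context predicates h_i guard production rules f_i, and a transition first finds the applicable contexts, then applies one rule. The tank-level state and the "fill"/"drain" rules below are invented for illustration.

```python
# A sketch of a non-homogeneous system per Definition 2.
# State v: an integer level; input u: a command symbol.

rules = [
    # (h_i: context predicate on (v,u), f_i: production rule)
    (lambda v, u: u == "fill"  and v < 10, lambda v, u: v + 1),
    (lambda v, u: u == "drain" and v > 0,  lambda v, u: v - 1),
]

def step(v, u):
    """One transition in two stages: determine the applicable rules,
    then apply one.  With disjoint sets S_i at most one rule fires;
    when no context holds we take the identity, as the text suggests
    for fitting the definition into discrete state systems."""
    applicable = [f for h, f in rules if h(v, u)]
    if not applicable:
        return v                      # stopping condition / identity
    return applicable[0](v, u)        # the unique applicable f_i

v = 0
for u in ["fill", "fill", "drain", "drain", "drain"]:
    v = step(v, u)
print(v)  # 0: two fills, three drains, floored at the empty state
```

Note that the list of applicable rules computed in the first stage is exactly the "current behavior mode" discussed above, and recording which h_i was satisfied gives one of the explanation mechanisms described.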

SYSTEM INTERFACES
A system design for a large plant, such as a nuclear power plant, is the result of the effort of many engineers in a large organization. The size of the organization, coupled with the ineffectual use of computer resources, can introduce delays and errors in the information flows between engineering design groups. The effect can be the introduction of inconsistencies into the design; the removal of inconsistencies requires additional financial resources and can adversely affect the performance of the product. Effective use of computer data bases, which are designed for technical information management, could significantly improve this situation, and effective use of knowledge representation and automated reasoning may provide mechanisms for automated consistency checks on design progress. Current data base systems are oriented toward non-technical users; although much of the technology required to maintain relationships among technical information packets is mature, the interfaces required to define and modify technical packets are scarce. This situation is similar to that encountered in computer-aided design and manufacturing systems; however, those systems are further developed because of the large level of effort assigned to CAD/CAM by hardware manufacturers. Similar capabilities are needed in the more general setting of systems design.

Control System Design Phases
The design phase of a control system can be decomposed into four subproblems:

1. Modeling the plant which is to be controlled,
2. Determining the design requirements,
3. Designing the controller, and
4. Evaluating the plant and controller design with respect to the design requirements.

Each of these stages presents a serious challenge to the designer. Specifying a model of the plant is often a major task. Although this task may require models of components with poorly understood behavior, this is not the major difficulty. Given exact component models, the reduction of a graph of interconnected components to a plant model is not difficult. However, component models are normally under the jurisdiction of different engineering groups, which work from (perhaps slightly outdated) specifications of interfaces between component subsystems, and make implicit (and often unrecorded) assumptions about the behavior of subsystems designed by others. Severe problems exist in current engineering organizations due to time delays in the transfer of information between design groups, to errors in the transmission of data, and to the omission of information which, if available, would affect the design of other subsystems. The existing problems center around the use of incorrect, missing, or outdated information, and around the inconsistencies which arise in the design due to assumptions based upon such information. One may view the engineering design process as starting with a very open, ill-specified problem and massaging it into a form in which it is amenable to an available methodology from which the component model may be obtained. During the modeling and design process, the engineer needs to evaluate the design in realistic situations. There are difficult problems concerning the selection and use of appropriate methodologies for this evaluation. The problems concerning the interface between algorithms which implement mathematical results and the engineer are greatly neglected and generally underestimated. AI can make significant contributions toward the use of mathematical software.


Access to Software Tools
There are some rather practical reasons for the interest in expert systems technology. To provide explanations to a user, a system must have fairly sophisticated communication capabilities (natural language or graphics); the effect of this has been the production of relatively user-friendly interfaces. User-friendliness may appear to be a gimmick; however, financially, it is certainly not. It has the important effect of reducing training times and the time between receipt of software and its effective use. Particularly in the systems area, the problem is often not the lack of algorithms but the lack of high level access to those algorithms. Of the three hundred random number generators, which one should be used in this application? Which of the twenty different routines for multiplying two matrices together should be used? Has that calculation already been done? And if so, which of the fifty files created so far contains the results? It is a well-known phenomenon that as a system grows, its usability often starts to decline. Often the designer is caught in this dilemma of scale: to create a system which is coherent and easy to use (but which experts will probably label a "toy"), or to create a monster (which only a dedicated expert would dream of using). In this area AI interface techniques can play the role of facilitator, and manage the interface between software packages and the user. Overcoming some of these problems of scale is a matter of adequate file management and the generation of advice on the appropriate use of software and on the overall problem solution strategy.

Software Packages
For control systems generated for a fairly constrained environment, it would seem reasonable that comprehensive packages should be available. Examples include simulation, analysis, and design packages such as ACSL (14), CASCADE (3), CLADP (6,12), IDPAC (1,23), MATLAB (15), MATRIXx (22), SIMNON (1,7), and SSDP (19). The problem, of course, is that these packages are only available for areas which can support the development costs involved. There are still no systems which learn about or adapt to their working environment by asking appropriate questions and by observing the way in which they are used. The programmer who is an expert on the package is still essential. The problem with the current state of packages is not that they do not contain sufficiently potent algorithms. Rather, the sheer quantity of available routines is often the problem (how many different ways are there to generate a random number or multiply two matrices?). As a result, often only an expert can wield the tools and be sure of obtaining the best results. Furthermore, it is often very difficult to become an expert from using the package alone. Can one rediscover the loop transfer recovery method of controller design from descriptions of the algorithms in the package? As the basis of numerical routines improves, it becomes possible to hide "unnecessary" complications from the user. This has been done to some extent in the differential equation solver ODEPACK (10) and in the DELIGHT (16,17) optimization packages. Hiding details allows a more abstract view of the problem. This has an inherent danger: namely, that the expertise at the less abstract levels will disappear. A good system should not have "black boxes" which cannot be opened. Further, climbing to higher levels of abstraction should not be at the cost of using inefficient algorithms and generic implementations, or of building in inflexibility. It is clear, however, that only the rare person can span all the details (or even a large part) involved in the whole design process. Thus, there is a growing need for computer systems to know the details and to be able to tell users how the details should or might be applied to specific problems. There are currently many approaches to these problems. Many commercial systems have started with one extreme, the "command interpreter", and have incrementally improved the environment with interactive help and more sophisticated macros. Some hybrid environments are being developed in which an expert is, metaphorically, looking over the shoulder of a user (2,11). At the other extreme is the "expert consultant" environment. The advantage of the latter environment is that a relative novice can quickly perform designs; the details will be managed. The disadvantage is that a particular approach to the problem will be taken. Thus, the user must trust the consultant, as its approach may not be the one the user had in mind. Furthermore, it may be difficult to force the system to change its approach. In the command interpreter environment, at least the user is in command.

REAL TIME SYSTEMS
The use of AI in real time is still waiting for sufficiently fast processing technology. It is fairly clear that there are appropriate uses for AI in real time, at least according to our definition. Many applications require the representation of symbolic knowledge, and its manipulation in an abstract form. An example is the implementation of decision aids for commanders in C3 systems. Here, the ad-hoc description of symbolic knowledge, such as the set of possible strategies of enemy forces, as, for example, integers, obscures the information at best. Such a representation yields no clue to the manipulation or use of that knowledge, and in this application it is vital that explanation facilities exist. Many of the difficulties associated with real time AI hinge upon the efficiency of the AI system. We point out that current representation schemes for symbolic knowledge and its use are primitive. A word of caution is in order: perhaps if a significant share of the money which is committed to the development of AI supercomputers were allocated to the development of a sound theoretical basis for knowledge representation, supercomputers would be unnecessary. An example of the difficulties involved is the ad-hoc use of metarules to control the application of rules in forward- or backward-chaining inference engines. A second area in need of significant research and development before real time AI becomes a reality is the interface between symbolic computation and the rest of the world. This interface has been neglected until quite recently by the AI community; however, it is critical if a real time system is to gather information about and send control actions to the process it is supposed to influence. A large block of this area concerns the implementation of symbolic processing environments.

IMPLEMENTATION ISSUES
The use of numeric information to influence the decision process is required for the effective use of artificial intelligence in


any of the areas described in this paper. Unfortunately, most of the symbolic programming environments which are available do not implement an effective interface between the symbolic computation world and the world of numeric algorithms. There are two reasons for this. First, the fundamental problem is difficult; the support environments required by, for example, Prolog and Fortran 77 are vastly different and often incompatible. Second, the majority of the research done in AI has been unconcerned with the lack of quality numerics.

Incompatibilities exist for numerous reasons. First, the symbolic environment must control the address space it is using, since operations such as forward and backward chaining generate large traces of their progress, which must be reclaimed upon completion; problems surface when the numeric environment exercises control over memory resources. Second, tag information is often embedded within objects in a symbolic environment to increase the efficiency of symbolic operations. Tags are bits reserved in the representation of data objects which identify the object as a member of a given class. The need for tag information creates a trade-off between the standard representation of numeric and character data (relative to procedural languages) and the use of bits for tags. Third, little attention has been paid by the AI community to issues such as floating point representation and computation standards; indeed, floating point arithmetic in symbolic environments has the same status as floating point arithmetic in general purpose computers did twenty years ago. These issues, and others, relate to the direct linkage of symbolic and numeric code.

A second option is to link symbolic and numeric environments at the task level, with well-defined task-to-task communications protocols. This has been done; in fact, most expert systems developed in the systems community use some variation of this technique. The disadvantage is efficiency, as well as the effect on accuracy of varying numeric representations and of conversions between representations. The degradation in performance which results may or may not be acceptable in a given application. A third option is to implement all code in the symbolic processing language. This is not really an option: consider the body of existing code which is used in the systems field, which has been tested by a large user population on a wide spectrum of problems. Further, this option does not address the poor quality of the floating point implementations in most symbolic environments.

A primary aspect of systems applications of AI is the use of numeric data. The large base of algorithms implemented in procedural languages (mostly Fortran) requires that the symbolic environment provide a sophisticated interface to these languages. This is currently a very weak aspect of symbolic environments. In real time systems, implementations using multiple tasks and task-to-task communications are probably not acceptable.
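The task-level option can be sketched in modern terms with two tasks exchanging messages over a pipe: a "symbolic" front end that decides a numeric result is needed, and a numeric worker that services requests. The protocol (the op names and payload shapes) is invented for illustration; a 1985 system would have used operating-system task-to-task facilities rather than Python, but the structure is the same.

```python
# A sketch of linking symbolic and numeric environments at the task level.
from multiprocessing import Process, Pipe

def numeric_worker(conn):
    """Numeric side: service requests over the connection until told to stop."""
    while True:
        msg = conn.recv()
        if msg[0] == "stop":
            break
        if msg[0] == "dot":               # inner product of two vectors
            _, x, y = msg
            conn.send(sum(a * b for a, b in zip(x, y)))

if __name__ == "__main__":
    parent, child = Pipe()
    worker = Process(target=numeric_worker, args=(child,))
    worker.start()
    # Symbolic side: a reasoning step requires a number, so it asks for it.
    parent.send(("dot", [1.0, 2.0], [3.0, 4.0]))
    print(parent.recv())                  # 11.0
    parent.send(("stop",))
    worker.join()
```

The message-passing round trip is exactly where the efficiency and representation-conversion costs discussed above are paid: every number crosses a serialization boundary instead of a procedure-call boundary.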

SUMMARY
We have attempted to summarize our views of the contributions which the fields of systems and artificial intelligence have to offer each other. We found it necessary to define, partially by example, what we mean by "artificial intelligence". There are several very interesting areas for research by people skilled in both systems and AI. Much of what we have proposed revolves around the different approaches taken to model physical systems by the two fields. Since this research should be applied to specific problems, rather than purely theoretical, we have suggested areas where we believe extremely difficult research problems exist, and where solutions can have a major impact. Examples of open problems include the melding of computers into engineering working environments which encompass a large number of engineers, and the automated implementation of real time systems. The first problem is only partially addressed by the availability of engineering workstations, software design tools, and access methods for distributed technical database systems. A deeper question is the effect which sophisticated computing resources should have on the engineering organization. Rather than ask how computers can be used to assist existing engineering organizations, perhaps the question should be: what should be the engineer's working environment using foreseeable computer tools (18)? Significant progress has been made in addressing an individual engineer's needs for powerful personal computing; however, the integration of distributed computation, centralized supercomputers, and the engineering organization has not been achieved. The second area of difficulty is the automated implementation of system designs. Except in specific laboratory environments, implementation is done by intuitive engineering judgment, rather than by the application of formalized methods, automated or not. In some discussions, this has been referred to as the "automated roll-out" of engineering designs. The influence of AI technology on the implementation of real-time systems (embedded applications) could be substantial, not to mention the possible use of real-time AI in the applications. In order to address the issues of automated implementation, models must first be constructed of the implementation process. It is in this area where AI is likely to make substantial contributions. The difficulties associated with model construction were discussed in greater detail above.

REFERENCES
(1) Astrom, K. J., "Computer-aided modeling, analysis and design of control systems - a perspective," IEEE Control Systems Magazine, vol. 3, no. 1, pp. 4-16, 1983.

(2) Astrom, K. J. and J. E. Larsson, "An expert system interface for IDPAC," IEEE Control Systems Society Symposium on Computer-Aided Control System Design, Santa Barbara, CA, March 1985.

(3) Birdwell, J. D., M. Athans, S. A. Bly, J. R. B. Cockett, M. T. Heath, R. W. Heller, C. J. Herget, A. J. Laub, R. W. Rochelle, J. P. Stovall, and R. Strunce, Issues in the Design of a Computer-Aided System and Control Analysis and Design Environment. ORNL/TM-9038, Oak Ridge National Laboratory, Oak Ridge, TN, Aug. 1984.

(4) Bobrow, D. G. and M. Stefik, The LOOPS Manual. Xerox Corporation, Palo Alto, CA, December 1983.


(5) Cockett, J. R. B., Decision Expression Optimization. Department of Computer Science Report CS-85-59, University of Tennessee, Knoxville, TN, April 1985.

(6) Edmunds, J. M., "Cambridge linear analysis and design programs," IFAC Symposium on Computer Aided Design of Control Systems, Zurich, Switzerland, pp. 253-258, 1979.

(7) Elmqvist, H., SIMNON, an Interactive Simulation Program for Nonlinear Systems. Department of Automatic Control, Lund Institute of Technology, Report 7502, Lund, Sweden, 1975.

(8) Hayes-Roth, F., D. A. Waterman, and D. B. Lenat (eds.), Building Expert Systems, p. 28. Addison-Wesley, Reading, MA, 1983.

(9) Hayes-Roth, F., D. A. Waterman, and D. B. Lenat, "Principles of pattern-directed inference systems," in D. A. Waterman and F. Hayes-Roth (eds.), Pattern-Directed Inference Systems, pp. 577-601. Academic Press, New York, 1978.

(10) Hindmarsh, A. C., "Large ordinary differential equation systems and software," IEEE Control Systems Magazine, vol. 2, no. 4, pp. 24-30, 1982.

(11) Larsson, J. E. and P. Persson, "Knowledge representation by scripts in an expert interface," Proc. 1986 American Control Conference, Seattle, WA, June 1986.

(12) Maciejowski, J. M. and A. G. J. MacFarlane, "CLADP: the Cambridge linear analysis and design programs," IEEE Control Systems Magazine, vol. 2, no. 4, pp. 3-8, 1982.

(13) Minsky, M., "A framework for representing knowledge," in P. Winston (ed.), The Psychology of Computer Vision. McGraw-Hill, New York, 1975.

(14) Mitchell and Gauthier, Assoc., Inc., Advanced Continuous Simulation Language (ACSL), User Guide/Reference Manual. Concord, MA, 1981.

(15) Moler, C., MATLAB Users' Guide. Department of Computer Science, University of New Mexico, 1981.

(16) Nye, W. T., E. Polak, A. L. Sangiovanni-Vincentelli, and A. L. Tits, "DELIGHT: an optimization-based computer-aided design system," Proc. IEEE Int'l Symp. on Circuits and Systems, Chicago, IL, April 1981.

(17) Polak, E., P. Siegel, T. Wuu, W. T. Nye, and D. Q. Mayne, "DELIGHT.MIMO: an interactive, optimization-based multivariable control system design package," IEEE Control Systems Magazine, vol. 2, no. 4, pp. 9-14, 1982.

(18) Shaiken, H., "The human impact of automation," Plenary Session I (no paper), IEEE 24th Conf. on Decis. and Cntl., Fort Lauderdale, FL, December 1985.

(19) Spang, H. A. III, State Space Design Program (SSDP) - Reference Manual. GE Internal Report, Schenectady, NY, 1984.

(20) Goldberg, A. and D. Robson, Smalltalk-80: The Language and its Implementation. Addison-Wesley, Reading, MA, 1983.

(21) Swartout, W. R., Producing Explanations and Justifications of Expert Consulting Programs. MIT Laboratory for Computer Science Report LCS-TR-251, Cambridge, MA, 1981.

(22) Walker, R., C. Gregory, Jr., and S. Shah, "MATRIXx: a data analysis, system identification, control design and simulation package," IEEE Control Systems Magazine, vol. 2, no. 4, pp. 30-37, 1982.

(23) Wieslander, J., IDPAC User's Guide, Revision 1. Department of Automatic Control, Lund Institute of Technology, Rept. 7605, Lund, Sweden, 1979.
