November 1 - 5, 1997
Hyatt Regency Lake Tahoe
Incline Village, Nevada; USA
We will survey approaches to formal software development that are based on category theory. This line of research started in the early 1970s and, after decades of work, is now becoming the basis of modern tools and techniques. These approaches have the following two characteristics:
A key result of the research on theory-based software development is that the modularity and refinement mechanisms are largely independent of the language and logic in which the theories are written. This independence is accomplished by casting the relevant concepts---theories, interconnections, refinement, and even logics themselves---in the abstract mathematical language of category theory.
As a result, it is possible to build formal development environments supporting the processes above by implementing their mathematical basis, category theory. The speakers have pioneered the transition of research results into tools and believe that tools of the future will have strong mathematical foundations.
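The categorical view described above can be made concrete in a few lines of code. Below is a minimal sketch (all names and representations are illustrative assumptions, not Specware's actual data structures): a theory is a presentation consisting of sorts, operations, and axioms, and a theory morphism maps the symbols of a source theory into a target theory. Real tools would also discharge proof obligations showing that translated axioms hold in the target; that step is elided here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Theory:
    sorts: frozenset
    ops: frozenset       # operation names
    axioms: frozenset    # axioms, kept abstract here as strings

@dataclass(frozen=True)
class Morphism:
    source: Theory
    target: Theory
    mapping: dict        # maps source sorts/ops to target sorts/ops

    def is_valid(self):
        # Every source symbol must be mapped to some target symbol.
        # (A real tool would also verify that translated axioms are
        # theorems of the target; we elide those proof obligations.)
        return all(
            self.mapping.get(s) in (self.target.sorts | self.target.ops)
            for s in (self.source.sorts | self.source.ops)
        )

def compose(f, g):
    """Compose morphisms f : A -> B and g : B -> C into A -> C."""
    assert f.target == g.source
    return Morphism(f.source, g.target,
                    {s: g.mapping[t] for s, t in f.mapping.items()})

# Example: refining an abstract "Container" theory into a "List" theory.
container = Theory(frozenset({"Elem", "Cont"}),
                   frozenset({"empty", "insert"}),
                   frozenset({"insert is total over Elem"}))
lists = Theory(frozenset({"Elem", "List"}),
               frozenset({"nil", "cons"}),
               frozenset({"cons is total over Elem"}))
refine = Morphism(container, lists,
                  {"Elem": "Elem", "Cont": "List",
                   "empty": "nil", "insert": "cons"})
print(refine.is_valid())  # True
```

Note that the machinery above never inspects the axioms themselves; this language-independence is exactly the point made in the preceding paragraph.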
This tutorial will benefit both practitioners interested in understanding the basis of modularity and refinement, and theoreticians interested in how diverse research results can be combined into coherent development methodologies and tools.
This tutorial will describe and compare a number of techniques that have recently been applied to automated analysis of requirements models, including model checking, consistency analysis and requirements traceability analysis. We will describe the techniques, and illustrate their use on a number of NASA case studies. We will concentrate particularly on lightweight formal methods, which can be applied by a Verification and Validation (V&V) team independently of the development methodology chosen by the development team.
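To give a flavor of one of these techniques, here is a minimal sketch of explicit-state model checking. The pump-controller model and safety property below are invented for illustration and are not drawn from any NASA case study; a real checker would use a modeling language and far more efficient state-space algorithms.

```python
# Toy requirements model: a tank with a water level (0..3) and a pump.
# The transition relation is a function from a state to its successors.
def successors(state):
    level, pump = state
    out = []
    if pump == "on" and level > 0:
        out.append((level - 1, pump))   # pump drains the tank
    if pump == "off" and level < 3:
        out.append((level + 1, pump))   # tank fills
    # Controller requirement: pump on at high level, off at low level.
    if level == 3:
        out.append((level, "on"))
    if level == 0:
        out.append((level, "off"))
    return out

def check(initial, safe):
    """Depth-first search of the reachable state space; returns a
    counterexample state, or None if the property always holds."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        if not safe(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Safety property: the tank never overflows (level stays <= 3).
counterexample = check((0, "off"), lambda s: s[0] <= 3)
print(counterexample)  # None -> property holds in all reachable states
```

Checking a stricter (and false) property, such as `lambda s: s[0] <= 2`, returns the violating state, which is exactly the kind of concrete counterexample that makes model checking useful for requirements debugging.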
We will start with a brief overview of the current state of the practice of software V&V, and the role of an independent V&V team for high-assurance systems. V&V is an engineering practice aimed at reducing the risk of software failure. By applying analysis techniques throughout the development of a software system, a V&V team can help to identify and remove errors before the testing stage, thereby reducing development cost and increasing reliability. As the majority of safety-related errors in software can be traced to requirements errors, the area of requirements analysis is receiving increased attention.
Current practice in requirements verification and validation relies extensively on manual techniques, primarily inspection, to identify potential errors. Our aim in this tutorial is to introduce a number of automated techniques that can reduce the cost of analysis and shorten analysis turn-around in rapid, incremental development environments. We focus on new methods for requirements analysis that include:
In deductive program synthesis, we exploit automated deduction techniques to solve problems in program construction. At our most ambitious, we regard the entire software production process as a task of proving a mathematical theorem.
A program's specification is represented as a relationship between a given input and a desired output, expressed as a logical sentence. The specification can be phrased as a theorem that states the existence of an output that meets the specified conditions. The theorem is proved automatically and constructively; in other words, to establish the existence of the desired output, we are forced to discover a method for finding it. That method becomes the basis for a program that can be extracted directly from the proof. Because the proof shows that the extracted program does meet its specification, no further verification is necessary.
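The relationship between specification, proof, and extracted program can be illustrated with a toy example. The sketch below is not a theorem prover: the "extracted" program is written by hand, standing in for the witness a constructive existence proof would yield, and the names are ours. The point is the structure: the specification is a relation between input and output, and the program's outputs can be checked against it directly.

```python
# Specification: for input x >= 0, there exists y with y*y <= x < (y+1)**2.
# Expressed as an input/output relation:
def spec(x, y):
    return y * y <= x < (y + 1) ** 2

# "Extracted" program: the method a constructive existence proof would
# discover (here, binary search over candidate outputs).
def isqrt(x):
    lo, hi = 0, x + 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return lo

# Because the program is extracted from a proof of the specification,
# every output meets the spec; we spot-check that claim here.
assert all(spec(x, isqrt(x)) for x in range(1000))
```

In genuine deductive synthesis the assertion at the end is unnecessary: the proof itself guarantees that the extracted program satisfies the specification, which is the "no further verification" point made above.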
Once regarded as an academic toy, deductive program synthesis has begun to have practical impact on software practice. In NASA's Amphion system, software is composed from subroutine libraries by deductive methods; the system has been used by JPL planetary astronomers. In Kestrel's Specware system, automatic theorem-proving methods are used to construct smaller components of larger systems.
In this tutorial, we present the basic ideas of deductive program synthesis, illustrated by applications. No prior familiarity with logic or theorem proving is expected.
Planware is a domain-specific generator of high-performance scheduling software, currently being developed at Kestrel Institute. Architecturally, Planware is an extension of the Specware system, also under development at Kestrel. Specware is an environment for composing formal specifications, and for refining them to code. Specware is based on concepts from higher-order logic and category theory. Planware and Specware embody theoretical developments stemming from Kestrel's experience with previous systems such as KIDS and DTRE.
Planware has the following features:
Specific topics to be covered include:
The theme of domain-specific versus domain-independent synthesis will run through the tutorial. Except for the construction of a domain theory and problem specification, all of the operations in the Planware design process are domain-independent. Planware can be viewed as a paradigm for how to build domain-specific program generators by means of domain-specific control of (1) domain-dependent knowledge, and (2) domain-independent synthesis operations. The presence of domain-independent operations allows the user to develop software that falls outside the boundaries of the domain, albeit with less automation.
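The split between domain-dependent knowledge and domain-independent synthesis operations can be sketched as follows. Everything here is a hypothetical miniature, not Planware's actual design: the domain-independent part is a generic greedy-scheduling skeleton, and the domain theory is supplied as plain data (a priority key and a feasibility test).

```python
def make_scheduler(domain_theory):
    """Domain-independent synthesis operation: generate a greedy
    scheduler from a domain theory supplying a priority key and a
    feasibility test."""
    key = domain_theory["priority"]
    feasible = domain_theory["feasible"]

    def schedule(tasks, capacity):
        plan = []
        for task in sorted(tasks, key=key):
            if feasible(plan, task, capacity):
                plan.append(task)
        return plan

    return schedule

# Domain-dependent knowledge: tasks are (name, duration) pairs; shorter
# tasks go first; a plan is feasible while total duration fits capacity.
transport_theory = {
    "priority": lambda t: t[1],
    "feasible": lambda plan, t, cap: sum(d for _, d in plan) + t[1] <= cap,
}

schedule = make_scheduler(transport_theory)
print(schedule([("a", 4), ("b", 2), ("c", 3)], capacity=5))
# [('b', 2), ('c', 3)]
```

Swapping in a different domain theory yields a different generated scheduler with no change to the skeleton, which is the sense in which the synthesis operations are domain-independent.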
Prerequisites: No background in category theory is required -- the few categorical concepts that we use will be introduced. Some familiarity with logic and theorem proving will be assumed.