Abstracts for Logic and Computation Seminar Speakers

1997-1998

PREVIOUS TALKS


September 8

Mechanical Analysis of a Time-Triggered Group Membership Protocol

Patrick D. Lincoln

SRI International

A group membership protocol derived from one used for critical control functions in automobiles is presented, along with its analysis using the mechanical tools Murphi and PVS. As in any group membership protocol, nonfaulty processors must identify and remove faulty processors from their view of the group while maintaining agreement. This is accomplished with very little overhead by Kopetz's TTP protocol in a synchronous time-triggered model of computation, with processors broadcasting in turn in a ring. Under certain assumptions, this protocol guarantees that a faulty processor will be promptly diagnosed and removed from the agreed group of processors, and that a faulty processor will also diagnose itself as faulty. The protocol is correct under a fault-arrival assumption that new faults arrive at least n+1 time units apart, where n is the number of processors. Exploiting this assumption leads to unusual real-time reasoning in the correctness proof. In this talk, TTP and its correctness argument are compared to OM(1), the traditional Byzantine agreement protocol, and its mechanically checked proof. This is joint work with Shmuel Katz and John Rushby.
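
To fix intuitions, here is a minimal OCaml sketch of the membership-update idea (an illustration only, not the protocol as analyzed; the received predicate is a hypothetical stand-in for the fault model): in each slot one processor broadcasts, and every nonfaulty processor drops the broadcaster from its view if the broadcast did not arrive correctly.

    (* Processors broadcast in turn around the ring; a missed or
       corrupt broadcast causes the broadcaster to be dropped from
       the receiver's membership view. *)
    module IntSet = Set.Make (Int)

    type proc = { id : int; mutable view : IntSet.t }

    let step ~received (procs : proc list) (slot : int) =
      let broadcaster = slot mod List.length procs in
      List.iter
        (fun p ->
           if not (received p broadcaster) then
             p.view <- IntSet.remove broadcaster p.view)
        procs

The fault-arrival assumption is what gives the protocol room to re-establish agreement among the nonfaulty views before the next fault arrives.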


September 29

Hofmann's mixed modal/linear typed lambda calculus and polynomial time

Andre Scedrov

University of Pennsylvania

A type system based on intuitionistic S4 modal logic was introduced by Davies and Pfenning (POPL '96) and shown to provide an expressive framework for specifying and analyzing computation stages in the context of functional languages. A variant of this type system, extended with certain linear logic types and with a subtyping relation, is used by Hofmann (CSL '97) to express, in the context of typed lambda calculus, the notions of safe recursion and polynomial-time functional expressions studied by Bellantoni and Cook. Hofmann's proof that typable numerical functions are polynomial-time computable is purely semantic: it involves a comonad on a presheaf category that forces the desired property. Hofmann's paper and his prototype implementation of the modal lambda calculus can be accessed at www.mathematik.th-darmstadt.de/~mh. This will be an expository chalk talk.
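
For readers unfamiliar with safe recursion, the following OCaml sketch illustrates the Bellantoni-Cook discipline in general (not Hofmann's calculus itself; the normal/safe labels are illustrative): recursion is driven only by a normal argument, and the recursive result may be used only in safe position.

    (* Safe recursion on the normal argument n: the recursive result
       is passed to h only in safe position. *)
    let rec safe_rec g h ~normal:n ~safe:a =
      if n = 0 then g ~safe:a
      else h ~normal:(n - 1) ~safe:(a, safe_rec g h ~normal:(n - 1) ~safe:a)

    (* Addition fits the discipline: the recursive result is only
       incremented, which is a safe use. *)
    let add = safe_rec (fun ~safe:a -> a) (fun ~normal:_ ~safe:(_, r) -> r + 1)
    let seven = add ~normal:3 ~safe:4   (* = 7 *)

Exponentiation, by contrast, would need the result of one recursion to drive another recursion, which the discipline forbids; this restriction is the source of the polynomial-time bound.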


October 6

Symbolic analysis of hybrid systems

Rajeev Alur

University of Pennsylvania

A hybrid system is a dynamical system whose behavior exhibits both discrete and continuous change. A hybrid automaton is a mathematical model for hybrid systems which combines, in a single formalism, automaton transitions for capturing discrete change with differential equations for capturing continuous change. In this survey talk, I will illustrate symbolic algorithms for the verification of, and controller synthesis for, linear hybrid automata, a subclass of hybrid automata that can be analyzed automatically. I will conclude by discussing opportunities for future projects.
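
As a concrete picture, here is a small OCaml sketch of a hybrid automaton with constant-slope flows, i.e. an instance of the linear subclass mentioned above (the thermostat is the standard textbook example, not necessarily one from the talk):

    type mode = On | Off

    type automaton = {
      flow      : mode -> float;                 (* constant slope dx/dt *)
      invariant : mode -> float -> bool;         (* where time may pass  *)
      jump      : mode -> float -> mode option;  (* guarded switch       *)
    }

    let thermostat = {
      flow      = (function On -> 2.0 | Off -> -1.0);
      invariant = (fun m x -> match m with On -> x <= 22.0 | Off -> x >= 18.0);
      jump      = (fun m x ->
                     match m with
                     | On  when x >= 21.0 -> Some Off
                     | Off when x <= 19.0 -> Some On
                     | _ -> None);
    }

Symbolic analysis manipulates sets of (mode, x) pairs described by linear constraints, rather than individual trajectories.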


October 20

Proving the Termination of Mutually Recursive Procedures

Peter Homeier

University of Pennsylvania

How can one know that a program with mutually recursive procedures will terminate? Proofs of the partial correctness of such programs were studied by Hoare in the early 1970's, but their termination has been less intensely investigated. The literature does present methods for proving termination that suffice for some programs, but these may involve unnatural and complex artifacts for many ordinary programs. In addition, these methods require a large amount of ad hoc structure for each individual program's proof. The methods also uniformly depend on Hoare's original framework for program proofs involving recursion, which involves meta-level concepts that have been criticized as difficult to understand and possibly not well founded.

We present a new technique for proving the termination of programs with mutually recursive procedures which is naturally applicable to a greater range of programs and provides more structure, reducing ad hoc elements. We replace Hoare's rules for recursive procedure definition and call by a combination of new program logics for establishing progress and termination, without any appeal to meta-level concepts. This new technique provides a more general yet natural approach to proofs of termination, one that also supports mechanical implementation.
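
A minimal instance of the problem, in OCaml for concreteness, is a pair of mutually recursive procedures whose termination follows from a single measure that strictly decreases across every call:

    (* The measure n decreases on every even->odd and odd->even call,
       so both procedures terminate for n >= 0. *)
    let rec even n = if n = 0 then true  else odd  (n - 1)
    and     odd  n = if n = 0 then false else even (n - 1)

The difficulty is finding proof rules that impose this kind of argument on arbitrary procedure call graphs without ad hoc scaffolding.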


October 27

Higher dimensional categories: a logician's view

Michael Makkai

McGill University

Among the competing approaches to defining higher dimensional categories, I favor a modified version of the definition recently given by J. Baez and J. Dolan. A self-contained treatment of the modified definition, albeit with the end-piece deferred to a sequel, is given in a paper by C. Hermida, J. Power and myself (which is electronically available). The reason for my preference for this approach is its relation to a general theory of invariance under concepts of equivalence, called First Order Logic with Dependent Sorts (a monograph, also available electronically, whose revised form is soon to be published). The theory serves the purposes of a program for a new, "structuralist", foundation for mathematics, in which higher dimensional categories form the backbone of the universe of discourse. The talk will have to be limited to only a few of the technical details; on the other hand, I will try to convey the philosophical and foundational motivation.


October 30, 3:00-4:30 in Moore 216

Local type inference

Benjamin C. Pierce

Indiana University

We study two partial type inference methods for a language combining subtyping and impredicative polymorphism. Both methods are *local* in the sense that missing annotations are recovered using only information propagated from adjacent nodes in the syntax tree. One method infers type arguments in polymorphic applications using a local constraint solver. The other infers annotations on bound variables by propagating type constraints downward from enclosing application nodes.

We motivate our design choices by a statistical analysis of the uses of type inference in a sizable body of existing ML code.

[Joint work with David N. Turner]
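
To illustrate the first method, consider the ML-style fragment below (a toy illustration; the paper's setting is an explicitly typed language with subtyping and impredicative polymorphism): the instantiation of a polymorphic function is solved from the types of the adjacent argument nodes alone, with no global constraint propagation.

    let id : 'a. 'a -> 'a = fun x -> x

    let three = id 3         (* local constraint: 'a = int    *)
    let hello = id "hello"   (* local constraint: 'a = string *)

In an explicitly typed core language, the first application elaborates to something like id [int] 3, with the type argument recovered locally.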


November 10

The Secure Lambda Calculus: Programming with Integrity and Secrecy

Jon Riecke

Lucent Technologies

Statically typed programming languages prevent programmers from misusing data, e.g., from using an integer as a function. In the realm of security, there are other potential misuses of data: giving access to a salary database to all employees, or allowing a user to find out the secret key of another user.

To prevent these programming errors, we have been investigating a language called the Secure Lambda (SLam) calculus. The SLam calculus is a statically typed language that maintains security information as well as type information on data objects. The type system uses two classic ideas from the operating systems and security communities: "access control" and "information flow." Access control prohibits certain agents from directly creating or reading an object; information flow control limits the agents who, through flow-of-control or the actions of other agents, can influence or be influenced by the content of the object. We formulate correctness criteria for the calculus using traditional ideas from programming language theory, prove that the type system prevents security violations, and give some examples of the power of the system.

This is joint work with Nevin Heintze.
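
A toy OCaml rendering of the labeling idea (illustrative only, and checked dynamically here, whereas the SLam calculus enforces the discipline statically through its type system):

    (* Two-point lattice of secrecy levels, Low <= High. *)
    type level = Low | High

    let flows_to l1 l2 =
      match l1, l2 with High, Low -> false | _ -> true

    (* A value carrying its secrecy label. *)
    type 'a labeled = { level : level; data : 'a }

    (* Reading at clearance clr succeeds only if the label may flow there. *)
    let read ~clr (v : 'a labeled) : 'a option =
      if flows_to v.level clr then Some v.data else None

    let salary = { level = High; data = 120_000 }
    let ok  = read ~clr:High salary   (* Some 120000 *)
    let bad = read ~clr:Low  salary   (* None: High may not flow to Low *)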


November 17

Symbolic model-checking systems with unbounded integer variables

Tevfik Bultan

University of Maryland

Symbolic model-checking has proved highly successful for large finite-state systems, in which states can be compactly encoded using binary decision diagrams (BDDs) or their variants. The inherent limitation of this approach is that it cannot be applied to systems with an infinite number of states -- even those with a single unbounded integer variable.

In this talk, we discuss our progress in extending model-checking to unbounded integer domains. First, we outline our prototype integer-based model-checker, which uses Presburger constraints as its underlying state representation. We show how this approach was able to verify solutions to some well-known concurrency problems.

Next, we present our recent work on composite analysis, which combines the strengths of (1) BDD-based model-checking (for Boolean and enumerated types) and (2) Presburger constraints (for integer types). Using these techniques, a system's valuations are encoded in both representations, with the choice depending on the variables used. We demonstrate our method's effectiveness on an SCR requirements specification that includes a mixture of Booleans, integers, and enumerated types.
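
The flavor of the symbolic representation can be conveyed with a toy one-variable fragment (an illustration, not the actual constraint library used in the tool):

    (* A set of states over an unbounded integer x, as a formula. *)
    type formula =
      | Geq of int                 (* x >= c *)
      | And of formula * formula
      | Or  of formula * formula

    (* Post-image of the guarded transition "x >= 1 -> x := x - 1",
       computed on formulas rather than by enumerating states: the
       update is injective on its guard, so the image distributes
       over And and Or, and { x | x >= c } maps to
       { x | x >= max (c - 1) 0 }. *)
    let rec post = function
      | Geq c      -> Geq (max (c - 1) 0)
      | And (f, g) -> And (post f, post g)
      | Or  (f, g) -> Or  (post f, post g)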


December 1

A Reference Model for Requirements and Specifications

Carl Gunter

University of Pennsylvania

One of the key problems faced in the practical adoption of formal methods arises when such methods are usable only at specific stages of the software engineering process and must work with a specific form of data. At AT&T and Bell Labs, researchers have seen a number of instances where tools like model checkers might have been useful in a project, had the kinds of specifications on which such tools work been available. A key problem is that projects do not typically use formal methods in the development of specifications, so the information on which a formal method might be used is unavailable, or would be very expensive to obtain. Indeed, once a project has chosen not to use formal language early in the development of requirements and specifications for software, it is difficult (often impossible, for all practical purposes) to introduce such formality at a later stage. This suggests that it is essential to find ways in which formal language can be introduced at the earliest stages of software requirements capture.

In this project we aim to describe a form of REFERENCE MODEL for the treatment of requirements and specifications. Reference models are a familiar concept from standards, where they are used as a basis for comparison and guidance in structuring a system. In the case of formal methods, the development of a general reference model will be useful in providing a methodology for organizing the theories related to a development project. If done correctly, this will ease the task of early incorporation of formal methods.

Our reference model focuses on the classification of SHARED PHENOMENA in the development of computer software. We are working on developing this methodology with the use of the Higher-Order Logic (HOL) theorem proving system. In this context, our methodology entails the creation of a collection of components: domain knowledge about the world (W), requirements (R), specifications (S), a program (P), a description of the programming platform or machine (M), and a designated terminology shared among them.

In the ideal case, except for the designated terminology, these are all structured as HOL theories. The names in the designated terminology are classified according to VISIBILITY and CONTROL relative to the machine and the environment, and this classification is used to generate a basic collection of relationships between the components.

We have been working on developing HOL to support this methodology effectively through the analysis of case studies. Our first case study concerns what we call the "village telephone system" (VTS), a simple communication system involving telephones that experience events such as going off-hook, alerting, and being connected. This study reveals a collection of points about the capabilities and limits of the HOL system for this sort of application, and the importance of distinctions like visibility and control in formulating basic relationships. We have a nearly complete analysis of VTS using HOL.

More recently we have been working toward a much larger problem involving the translation of instructions between the Lucent 1600 and 16000 Digital Signal Processors (DSPs). Here the requirements involve the preservation of certain kinds of observable phenomena, and our program will be coded in SML/NJ to move legacy software from the old 1600 processor to the new 16000. Our aim is to show that an orderly formal analysis is competitive with the cycle of coding, testing, and debugging that would be needed to establish sufficient confidence in applications ported between these processors.
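
The classification itself is simple to state; a small OCaml sketch (with hypothetical VTS event names, for illustration only):

    type control    = Environment | Machine
    type visibility = Visible | Hidden   (* to the other side *)

    type phenomenon = { name : string; control : control; visibility : visibility }

    let offhook = { name = "offhook"; control = Environment; visibility = Visible }
    let connect = { name = "connect"; control = Machine;     visibility = Visible }

Basic relationships between the components, such as which of them may refer to which names, are then generated from this classification.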

Information about our project, including papers and software, can be obtained from the HOL home page at http://www.cis.upenn.edu/~hol.

Joint work with Elsa L. Gunter (Bell Labs), Michael Jackson (AT&T Laboratories), and Pamela Zave (AT&T Laboratories).


December 8

Higher-Order UnCurrying

John Hannan and Patrick Hicks

Pennsylvania State University

We present a formal specification of unCurrying for a higher-order, untyped functional language. This specification supports the general unCurrying of functions, even for functions which are passed as arguments or returned as values. The specification also supports partial unCurrying of any consecutive parameters of a function, rather than only complete unCurrying of a function's parameters. We present the specification as a deductive system bearing some resemblance to a type system. We also define a practical algorithm, based on algorithm W, which implements the unCurrying. This work provides a general framework for reasoning about unCurrying, independent of a particular algorithm, and supports a richer form of unCurrying than is currently found in compilers for functional languages. The specification also provides a basis for proving implementations of unCurrying to be sound and complete.
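
The transformation itself can be pictured on a first-order case (a toy illustration; the specification above also handles the higher-order cases where functions are passed as arguments or returned as values):

    (* Before: each call builds an intermediate closure. *)
    let add_c x y = x + y

    (* After unCurrying the two consecutive parameters into a tuple: *)
    let add_u (x, y) = x + y

    (* Call sites are rewritten in step:
         add_c 1 2   becomes   add_u (1, 2)
       and a residual partial application is eta-expanded:
         add_c 1     becomes   fun y -> add_u (1, y)  *)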


February 9

SML '97 and Beyond

David MacQueen

Bell Labs

After ten years of experience with the design, definition, implementation, and use of Standard ML, a rough consensus developed about what worked well and what needed fixing. At Cambridge in the fall of 1995, Robin Milner, Mads Tofte, Bob Harper, and I started work on a revised version of the language; in parallel, a group representing SML/NJ, Harlequin MLWorks, and Moscow ML was working on a major revision of the Basis library. The result is called SML '97.

I will discuss the major language changes and their rationale and cover the high points of the new basis. SML '97 is implemented by SML/NJ 110, a new release of the Standard ML of New Jersey compiler.

I will also give a brief preview of things to come.


March 30

Testing of Approximate Correctness of Programs

Funda Ergun

University of Pennsylvania

In this talk I will discuss how to check programs in the realistic setting where the output of the program is approximate instead of exact. In other words, the requirement on the program is that its output be not too far from the correct value. This setting is a step toward making tests more realistic, since exact correctness is usually too high an expectation, especially for numerical computations, where one has to deal with accuracy issues stemming from finite representation, iterative methods, and irrational numbers. I will talk about the general approach to this problem and how to use techniques from algebra, numerical analysis, and other fields to analyze this approach for computations of various classes of functions, as well as an example where the technique does not work as expected.
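
As a small example of the flavor of such tests (an illustration, not a result from the talk), a program claiming to compute f(x) = x^2 can be spot-checked against an algebraic identity, with equality relaxed to a tolerance:

    (* Accept answers within tolerance eps instead of exact equality. *)
    let approx_eq ~eps a b = Float.abs (a -. b) <= eps

    (* Test one instance of the identity  f(x+1) - f(x) - f(1) = 2x. *)
    let check f ~eps x =
      approx_eq ~eps (f (x +. 1.0) -. f x -. f 1.0) (2.0 *. x)

    let passes = check (fun x -> x *. x) ~eps:1e-9 3.0   (* true *)

The analytical difficulty is choosing the tolerance and the test points so that approximate satisfaction of the identity actually implies approximate correctness of the program.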



April 6

Shifting Foundations

Peter Freyd

University of Pennsylvania

The role of foundations in computer science is quite different from their traditional role in mathematics, and that has led to a long list of difficulties. When those difficulties are resolved, there are enticing possibilities of real applications -- not to pure but to applied mathematics.


April 13

A Calculus for Collections and Aggregates

Val Tannen

University of Pennsylvania

We present a calculus that should play for database query languages the same role that the lambda calculus plays for functional programming. For the semantic foundations of the calculus we introduce a new concept: monads enriched with algebraic structure. We model collection types through enriched monads and aggregate operations through enriched monad algebras. The calculus derives program equivalences that underlie a good number of the optimizations used in query languages.

Joint work with Kazem Lellahi, Université de Paris 13.
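
In ML terms, the list monad with a fold as its algebra gives the flavor of the pairing (a sketch; the calculus itself works with enriched monads in general, not just lists):

    (* Lists as a collection monad. *)
    let return x = [x]
    let bind xs f = List.concat_map f xs

    (* An aggregate as an algebra over the monad. *)
    let sum = List.fold_left ( + ) 0

    (* A query: the total of x * 2 for x drawn from the collection. *)
    let total = sum (bind [1; 2; 3] (fun x -> return (x * 2)))   (* 12 *)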


April 27

Concurrent games and full completeness

Samson Abramsky

We present a model, or family of models, for Linear Logic, which can be seen as a concurrent version of game semantics. The formalization of this model uses domain theory. There is also a general construction of "Linear PERs", which can be seen as a suitable adaptation of realizability to the setting of (Classical) Linear Logic.

The main result is Full Completeness for Multiplicative-Additive Linear Logic. The construction also leads to a fully abstract model for PCF.


May 4

Proof-Theoretic Foundation of Compilation in Logic Programming Languages

Iliano Cervesato

Commercial implementations of logic programming languages are engineered around a compiler based on Warren's Abstract Machine (WAM) or a variant of it. In spite of various correctness proofs, the logical machinery relating the proof-theoretic specification of a logic programming language and its compiled form is still poorly understood. In this paper, we propose a logic-independent definition of compilation for logic programming languages. We apply this methodology to derive the first cut of a compiler and the corresponding abstract machine for the language of hereditary Harrop formulas and then for its linear refinement.

URL: http://www.stanford.edu/~iliano/papers/jicslp98.ps.gz
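
The flavor of the target can be suggested with a WAM-like instruction set (the names below are illustrative only, not the abstract machine derived in the paper):

    type instr =
      | Get_constant of string * int   (* unify a head argument with a constant     *)
      | Put_constant of string * int   (* load a constant into an argument register *)
      | Execute of string * int        (* tail-call the last body goal              *)
      | Proceed                        (* return from a fact                        *)

    (* Compiling the fact  p(a).  and the clause  p(a) :- q(b). *)
    let fact   : instr list = [ Get_constant ("a", 1); Proceed ]
    let clause : instr list =
      [ Get_constant ("a", 1); Put_constant ("b", 1); Execute ("q", 1) ]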
