Abstracts for Logic and Computation Seminar Speakers

1996-1997


The Relational Knowledge-Base Interpretation and Feasible Theorem Proving for Intuitionistic Propositional Logic

Max I. Kanovich

Russian Humanities State University

A computational semantics is introduced for relational knowledge bases. Our semantics arises naturally from practical experience with relational databases and knowledge bases. We show that intuitionistic logic is an adequate logic behind relational knowledge bases. In particular, based on a specific calculus without contraction, an efficient prove-or-disprove algorithm is designed: (a) the algorithm runs in linear deterministic space, and (b) for every 'natural' problem, the algorithm runs in polynomial time, despite the fact that, because of the PSPACE-completeness of propositional intuitionistic logic, exponential execution time should be expected in the worst case. The point is that the worst-case problems require maximal cross-linking among all their possible subproblems.
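
For orientation, here is one standard example of a contraction-free rule, given in the style of Dyckhoff's LJT calculus; it is an editorial illustration and not necessarily the specific calculus of the talk. The left-implication rule is split according to the shape of the antecedent; for an atomic $p$ it reads: from $\Gamma, p, B \vdash G$ infer $\Gamma, p, p \to B \vdash G$. The implication $p \to B$ is consumed rather than copied back into the context, which is the kind of bookkeeping that keeps backward proof search within linear space.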


The Operational Content of Intersection Types

Simonetta Ronchi della Rocca

University of Turin

Various aspects of functional computation can be modeled by (various kinds of) lambda calculi: for example, call-by-name computation, call-by-value computation, lazy computation, and relevant computation. An operational semantics can be defined for each of these paradigmatic languages, starting from the definition of a set of "values" and a notion of convergence: a term converges if and only if it reduces (according to a given reduction strategy) to a value. A notion of operational equivalence between terms then arises naturally, based on Leibniz equality: two terms M and N are operationally equivalent if and only if every context behaves in the same way (with respect to convergence) when filled by either M or N. Intersection type assignment systems give a tool for reasoning about operational equivalence of terms. We say that a type assignment system is correct with respect to an operational equivalence if the fact that two terms can be assigned the same types implies that they are operationally equivalent; if the converse implication also holds, the system is called complete. For each of the operational semantics induced by the computations listed above, a correct type assignment system can be defined.
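
In symbols (a compact restatement of the definitions above; the notation is added here and not taken from the abstract): writing $M \Downarrow$ for "M converges to a value under the chosen strategy", M and N are operationally equivalent iff for every context C[ ] we have $C[M] \Downarrow \Leftrightarrow C[N] \Downarrow$. A type assignment system is then correct when terms with the same set of types are operationally equivalent, and complete when the converse implication also holds.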


Computation Models Based on Linear Logic

Mitsuhiro Okada

Keio University

In this overview we discuss the basic paradigms of traditional logical computation models for programming language theory, including the proof-search (logic programming), reduction (typed functional programming), and term-rewriting (equational programming) paradigms. We then show how, when these computation models are modified by changing the underlying logic from a traditional logic to linear logic, the modified (linear logic-based) models can capture important new computational concepts such as concurrency and resource-sensitivity. We also give some examples of logical analysis of process calculi, proof normalization, etc., using the framework of these linear-logical computation models.


An Introduction to the Distributed Join Calculus

Peter Selinger

University of Pennsylvania

This talk will be a self-contained introduction to the distributed join calculus in the form of a tutorial.

As the internet is rapidly growing and new communication technologies are becoming accessible, issues arise that are not accounted for by the traditional models of computation. When computational agents can travel between different sites in a global network, then matters of security and the possibility of physical failure of connections and sites must be dealt with.

In order to address these issues in a mathematical context, one needs a programming paradigm that can express such concepts as locality and failure, while being conceptually as simple as possible. One such paradigm is the distributed join calculus, introduced in 1995 by Fournet and Gonthier [POPL'96, CONCUR'96]. The join calculus is a concurrent, message-passing calculus similar to Milner's pi-calculus, but more suitable for modeling locality.

In the talk, I will begin by introducing the join calculus and its semantics in terms of a chemical abstract machine. I will show examples and discuss the relationship to other calculi such as the pi-calculus. The material that I will present is taken from the two Fournet-Gonthier papers cited above. No previous knowledge of distributed calculi is necessary.
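
As a taste of the syntax (an approximate editorial rendering, not taken from the abstract), a join-calculus definition such as

    def ready(printer) | job(file) ▷ printer(file) in ready(laser) | job(report)

declares a join pattern on the channels ready and job; in the chemical abstract machine the two messages ready(laser) and job(report) can react together, being consumed and replaced by the message laser(report).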


An Introduction to Program Checking

Sampath Kannan

University of Pennsylvania

Two traditional approaches for establishing program correctness are verification and testing. In verification we seek to prove that a program is correct and in testing we simply confirm that the program performs correctly on a number of (carefully) chosen inputs. Verification is difficult and testing does not produce satisfactory mathematical guarantees about the correctness of programs.

Program checking provides a third alternative. Here we seek to prove that the output of a program on any particular input is correct. A program checker is itself a program that is run each time the program being checked is run. Using randomness and interaction (with the program being checked) the checker produces probabilistic proofs of correctness of the output.

In this self-contained talk we will present a formal notion of checking, give examples of checkers, examine connections between checking and other notions in complexity theory, and explore extensions and modifications of the basic paradigm of checking.
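
To give a concrete feel for what a checker can look like, here is a minimal Haskell sketch of Freivalds' classic randomized check for matrix multiplication, chosen here for illustration; it is not necessarily one of the examples from the talk. Rather than re-multiplying the matrices, each round of the checker spends only quadratic time and catches a wrong product with probability at least 1/2 (square matrices are assumed).

    import System.Random (randomRIO)
    import Control.Monad (replicateM)

    type Vec = [Integer]
    type Mat = [[Integer]]

    -- Multiply a matrix by a column vector.
    matVec :: Mat -> Vec -> Vec
    matVec m v = [sum (zipWith (*) row v) | row <- m]

    -- One round of Freivalds' check: pick a random 0/1 vector r and
    -- test whether A(Br) equals Cr.  If A*B = C this always holds;
    -- otherwise it fails with probability at least 1/2.
    freivaldsRound :: Mat -> Mat -> Mat -> IO Bool
    freivaldsRound a b c = do
      r <- replicateM (length c) (randomRIO (0, 1))
      return (matVec a (matVec b r) == matVec c r)

    -- Run k independent rounds, driving the error probability below 2^(-k).
    checkProduct :: Int -> Mat -> Mat -> Mat -> IO Bool
    checkProduct k a b c = fmap and (mapM (const (freivaldsRound a b c)) [1 .. k])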


The Security of Dynamic Linking with Static Typing

Drew Dean

Princeton University

Java combines (mostly) static type checking with dynamic linking. These features interact in security-relevant ways. In this talk, we explore their interaction, discussing two major bugs that led to the complete penetration of the system. We then move to a formal model of dynamic linking in PVS, and formally prove a key safety property about our model. Other security-relevant "features" of Java will be discussed. This is joint work with Dirk Balfanz, Ed Felten, and Dan Wallach.


Relational Queries over Interpreted Structures

Leonid Libkin

Bell Labs

It is a classical result in relational database theory that no first-order query can express the property that the number of elements in a database is even. This is proved by a straightforward application of Ehrenfeucht-Fraisse games. If first-order expressions use the order relation on database elements, parity remains inexpressible, but the Ehrenfeucht-Fraisse argument is "exponentially harder". If database elements are real numbers and operations +,* are used in queries, the Ehrenfeucht-Fraisse argument cannot be done by an ordinary human being. In fact, it is not even immediately clear that parity is still inexpressible.

The question of whether the parity test can be defined as a first-order query with polynomial inequality constraints was asked by the late Paris Kanellakis when he introduced the constraint database model. In this talk, I'll review the constraint model which, for our purposes, can be seen as doing relational theory over interpreted structures, for example, the real field. I will also explain that the inability to deal with interpreted operations is a major gap between theoretical and practical query languages. My main goal is to explain a new collection of techniques for extending the standard results of relational theory to such a setting.
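
As a tiny example of the constraint model (the notation is added here), a relation over the real field is stored as a quantifier-free formula rather than as a finite set of tuples: the infinite relation $Disk(x,y) \equiv x^2 + y^2 \le 1$ finitely represents the unit disk, and first-order queries mentioning $+$, $*$ and $<$ can be evaluated on such inputs by quantifier elimination over the reals.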

In a more abstract way, this work can be viewed as doing finite model theory when finite models live inside some infinite structure. I demonstrate several techniques for combining the finite and the infinite, and prove several complexity and expressivity bounds for various logics (first-order, second-order, fixpoint logics). By the end of the talk, you'll see at least two different proofs of the Kanellakis conjecture.

This is joint work with Michael Benedikt (Bell Labs), Guozhu Dong (U of Melbourne), and Limsoon Wong (ISS, Singapore).

Based upon:
[1] M. Benedikt, G. Dong, L. Libkin, L. Wong. Relational expressive power of constraint query languages. PODS'96.
[2] M. Benedikt, L. Libkin. On the structure of queries in constraint query languages. LICS'96.
[3] M. Benedikt, L. Libkin. Languages for relational databases over interpreted structures. In preparation.


Self-Certified Code

Peter Lee

Carnegie Mellon University

It is well known that it is easier to check a proof than to generate one. In this talk, I make use of this fact to show how an operating system can determine with certainty that it is safe to execute, in the kernel address space, a binary supplied by an untrusted source. The kernel first defines a safety policy and makes it public. Then, using this policy, an application provides binaries in a special form called Self-Certified Code, or simply SCC. Each SCC binary contains, in addition to the native code, a safety certificate that is a formal proof of the code's safety properties. The kernel can easily validate a certificate without using cryptography and without consulting any external trusted entities. If the validation succeeds, the code is guaranteed to respect the safety policy without relying on run-time checks.

The main practical difficulty of SCC is in generating the certificates. To see what is involved, I will describe our preliminary experience with a collection of standard network packet filters, written in hand-tuned DEC Alpha assembly language. Our SCC binaries are created automatically by a special prototype assembler, and can be executed with no run-time overhead, beyond a one-time cost of about 1 millisecond for validating the enclosed certificates. The net result is that our packet filters are formally guaranteed to be safe and are faster than packet filters created using Berkeley Packet Filters, Software Fault Isolation, or safe languages such as Modula-3. Finally, I will conclude with prospects for future work.


Linear Logic as a Logic Programming Language

Dale Miller

University of Pennsylvania and University of Genoa

In recent years, proof theory has been used to motivate and justify the design of a number of logic programming languages, including lambda Prolog and its linear logic refinement Lolli. Lambda Prolog and Lolli are sequential languages (lacking primitives for communication and synchronization) but contain rich forms of abstraction (including modular and higher-order programming). On the other hand, the linear logic programming language LO (Linear Objects) of Andreoli and Pareschi has primitives for concurrency but lacks abstractions. Since linear logic provides a foundation for each of these languages, it seems sensible to look inside it for a single logic programming language that contains these other languages. In this talk I describe the logic programming language Forum that modularly extends lambda Prolog, Lolli, and LO. In fact, Forum is a particular presentation of all of linear logic for which goal-directed search is complete. Forum contains primitives for both concurrency and abstraction. Forum has been used to provide specifications for the operational semantics of imperative and object-oriented programming constructs, sequent calculus proof systems for object-level logics, and a RISC-style, pipelined processor. I shall present several examples of Forum specifications. This talk is largely self-contained and assumes only familiarity with basic notions of the sequent calculus. [A home page for Forum can be found at http://www.cis.upenn.edu/~dale/forum/.]


Operational Modal Logic

Sergei Artemov

Steklov Mathematical Institute and Cornell University

Operational Modal Logic (OML) has been developed in order to fill a gap in the foundations of knowledge representation, where such a basic notion as The Provability had been missing a complete modal description.

A "natural born" Goedel provability logic S4 (1933) had been lacking its exact intended interpretation. The provability logic (Solovay, 1979) axiomatizes a formal provability operator that cannot be regarded as a decent model for The Provability; for example, the basic reflection property

"Provable F" implies "F"
fails for the formal provability logic.

In traditional modal logic the necessity operator $\Box F$ often stands for "there exists an evidence of F", thus hiding an existential quantifier behind the modality. We demonstrate that for the logic S4 the corresponding class of Skolem-type functions over "evidences" admits a finite and observable operational basis: in fact, it suffices to use only three operations. This allows us to reformulate S4 and some other classical modal logics in a purely operational manner, where a modality occurrence "necessarily F" is realized as "t is an evidence of F" for a corresponding term t. In particular, there is an algorithm that recovers a term assignment to modalities in an S4 proof. Thus OML provides a richer language for traditional modal logic.
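
For readers who know the later published form of this work (the Logic of Proofs), the three operations are presumably proof application, proof checking, and sum, governed by axioms of roughly the following shape (the notation is added here for orientation and is not taken from the abstract): $s{:}(F \to G) \to (t{:}F \to (s \cdot t){:}G)$, $t{:}F \to\ !t{:}(t{:}F)$, and $s{:}F \to (s + t){:}F$ (and symmetrically for $t$).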

This OML technique is applied to answer two old questions:

1. Modal logic S4, which was informally specified by Goedel in 1933 as a logic for The Provability, meets its exact intended interpretation.

2. The Brouwer-Heyting-Kolmogorov realizing operations (1931-32) for the intuitionistic logic Int also receive an exact interpretation as corresponding propositional operations on proofs; both S4 and Int turn out to be complete with respect to this proof realization.

OML is nicely axiomatized and is shown to be decidable, arithmetically complete, and sufficient to realize every operation on proofs that admits a propositional specification in arithmetic. We hope that OML terms will prove useful in many other applications of modal logic.


On Non-Determinism in Queries, Machines and Languages

Zoe Lacroix

University of Pennsylvania

There are different levels of non-determinism. A query is non-deterministic if it may have different possible outputs on a given input. For instance, the query "give me the name of a postdoc in IRCS" is a non-deterministic query (this year there are 13 possible answers to that query). On the one hand, a non-deterministic query is definable in a non-deterministic language, that is, a language that admits a non-deterministic semantics (a formula may have different interpretations). On the other hand, non-deterministic queries are computable by non-deterministic Turing machines.

In this talk, I will present various levels of non-determinism in computations on non-deterministic Turing machines with polynomial bounds on the resources. Along the way, I will consider numerous query languages, implicit logic, and choice logic, and establish correspondences among all these formalisms for both deterministic and non-deterministic queries. A paper is available at ftp://ftp.inria.fr/INRIA/Projects/verso/VersoReport-084.ps.Z


A Proof Method in Operational Semantics and its Application to Hybrid Theorem Proving

Doug Howe

Bell Labs

The usual notion of equivalence of programs is observational congruence: two programs are equivalent if one can be replaced by the other in any containing program without affecting the observable results of computation. Unfortunately, it can be quite difficult to directly use the operational semantics of a programming language to prove that two particular programs are equivalent under this definition, even for many "obvious" equalities. However, many languages admit a "coinductive" characterization of observational congruence that is much easier to work with. This characterization is a powerful tool for operational reasoning about program equivalence, and furthermore is valid for some languages which do not yet have a satisfactory denotational semantics.
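
One standard instance of such a coinductive characterization, recalled here for orientation (the talk's formulation may differ in detail), is applicative bisimulation for an untyped lazy functional language: a relation R on closed terms is an applicative simulation if, whenever M R N and M evaluates to an abstraction $\lambda x.M'$, then N also evaluates to some abstraction $\lambda x.N'$ with $M'[P/x] \; R \; N'[P/x]$ for every closed argument P. For such languages, observational congruence coincides with mutual applicative similarity, so proving two terms equivalent reduces to exhibiting a suitable relation and checking this single closure condition.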

The first part of this talk will focus on one of the main methods for proving that a programming language admits this kind of characterization. The method has been applied to a variety of programming languages, including functional and imperative languages, typed and untyped languages, and object-oriented languages.

The second part of the talk will cover a recent application of the method in establishing the semantic foundations for a "hybrid" interactive theorem prover. This hybrid system combines Nuprl and HOL, two systems with very different logics. Nuprl is a constructive type theory, where the basic semantic objects are untyped functional programs, while HOL is essentially a classical set theory. The method is used to give an extension of Nuprl's semantics into which HOL's semantics can be directly embedded. The hybrid Nuprl/HOL system retains Nuprl's constructivity and expressive type system, and can draw on the vast store of formal mathematics developed by the HOL user community over the last decade. A prototype of the system has been implemented and is currently being used to verify a complex cache coherency protocol (among other things).


Adjustable Graphic-Based Clustering Method

LiWu Chang

Naval Research Laboratory

Clustering can be viewed as an inductive method that constructs a graph representation (i.e., the model) for a given data set, with nodes standing for features, the unknown (referred to here as the hidden node) denoting the set of clusters, and links indicating probabilistic relationships among the features and the unknown. The hidden node is used to interpret unusual correlations embedded in a set of samples. In this talk, I'll describe the clustering system I have developed for experimenting with data organization from two aspects: the first concerns the properties of the proposed clustering method, and the second concerns graphic-based interpretation. In particular, we consider the following issues. One issue that plagues existing clustering methods is the inefficiency of handling large data sets; this problem exists in both top-down and bottom-up clustering search. The proposed method combines different modes of clustering and handles new data through iterative adjustment. The second issue is the evaluation of clustering criteria for the generated cluster hierarchy; the evaluation is carried out on the basis of statistical properties of the graphical model. To enhance clarity of interpretation, we organize instances according to highly correlated features. The merits of using a graph model will be shown with an example.


A Brief Tour of Net Reductions, Geometry of Interaction, and Optimal Implementations of Functional Programming Languages

Stefano Guerrini

University of Pennsylvania

In the 1970s, Levy gave a lower bound on the cost of implementing a higher-order functional programming language through his study of optimal reductions in the lambda calculus. He foresaw that this lower bound would be reachable by a clever implementation of sharing, but he could not give an algorithm. Only more than ten years later was Lamping able to give a brilliant solution to Levy's problem.
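
A one-line example of the difficulty (added here for illustration): in the reduction $(\lambda x.\, x\, x)\,((\lambda y.\, y)\, z) \to ((\lambda y.\, y)\, z)\,((\lambda y.\, y)\, z)$ the inner redex is duplicated, so a naive strategy performs its work twice; Levy-optimality demands that redexes belonging to the same family be contracted only once, which is precisely what a sharing implementation must arrange.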

Lamping's breakthrough is a sophisticated graph reduction technique that implements the sharing of lambda-term contexts. As shown more recently by Gonthier, Abadi and Levy, the remarkable point of Lamping's algorithm is that it is surprisingly related to Girard's Linear Logic and to Girard's research on the mathematical foundations of operational semantics through the so-called "Geometry of Interaction" (GOI).

This lecture series will not be overloaded with technical details. It aims to provide a smooth introduction to the fascinating subject of GOI and optimal reductions, presenting some of their main results and a taste of the truly intriguing nature of optimality: a nice, tough mathematical theory applied to a very practical problem.

The series will consist of two talks:

I. Nets: linear logic proof-nets and pure proof-nets for the lambda calculus. Correctness criteria. Dynamics: can we eliminate the use of boxes and give a graph rewriting system with no global rules?

II. Sharing and optimal reductions. Paths and GOI. Local and asynchronous implementation of the beta reduction.


Directions in Object-Oriented Programming Languages: Type Systems and Language Development

Kathleen Fisher

AT&T Research

Modern programming projects involve hundreds of programmers and millions of lines of code. Object-oriented programming languages (OOPL's) offer several powerful language features to help manage the difficulties of such large-scale programming. Not surprisingly, these powerful mechanisms interact in subtle ways, making good language design difficult.

One of the most problematic issues has been the relationship between subtyping and inheritance, two code-reuse mechanisms. Existing languages combine the mechanisms, limiting subtyping to those object types related via inheritance. This limitation greatly restricts the subtype polymorphism of such languages, preventing code reuse. It also forces programmers to use complex multiple inheritance mechanisms to produce desired subtyping relationships. Past studies have pointed out that subtyping and inheritance are conceptually distinct, suggesting they should be separated. In this talk, I will argue that the confusion regarding the relationship between subtyping and inheritance stems from the fact that there are two different forms of object type being used here. "Interface types", used in analytical studies, specify only the messages their objects understand. Such types are useful because they support a high degree of subtype polymorphism, enabling code reuse. In contrast, "implementation types", used in practice, constrain hidden aspects of their objects' implementations, limiting polymorphism but enabling more efficient message sending and more useful binary methods.

In this talk, I will present a provably sound type-theoretic foundation for object-oriented languages that includes both interface and implementation types, allowing the benefits of both forms of object type to be realized in the same language. This framework reveals that although subtyping and inheritance are distinct for interface types, they are unified for implementation types. The system also suggests that implementation types may be viewed as *subtypes* of the interface types obtained by "forgetting" the implementation constraints. This subtyping relationship enables objects to be manipulated efficiently in contexts that know their implementation constraints, while allowing them to be treated polymorphically in contexts that know only interface information.


Almost surely consensus

Gregory McColm

University of South Florida

Random graphs of various kinds are being used in many places these days. Theoretical biologists are looking at random subgraphs of hypercubes, statisticians are looking at random graphs in Euclidean space, and computer scientists are deep in the traditional Erdős-Rényi model. And so on. In many such models a nice graphical property will be almost surely true (or almost surely false) on large graphs, where 'nice' means 'definable in some nice logic': we say that such a model has a 'zero-one law' for that logic. We will look at some useful models that tend to produce sprawling, somewhat homogeneous graphs, and show how to use 'pebble games' to establish zero-one laws. We will also talk about random graphs and the meaning of life.
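
Stated precisely (the symbols are added here; the abstract gives the informal version): a random graph model has a zero-one law for a logic L if, for every sentence $\varphi$ of L, the probability that a random graph on n vertices satisfies $\varphi$ tends to either 0 or 1 as $n \to \infty$. The classical example is the Erdős-Rényi model G(n, 1/2), where every first-order sentence is almost surely true or almost surely false.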


Forcing and computational complexity

Gaisi Takeuti

University of Illinois and University of Pennsylvania

In this talk I will first give an expository mini course on forcing, nonstandard models of arithmetic and bounded arithmetic. I will not assume any knowledge about them.

Then I will show that forcing on nonstandard models of bounded arithmetic is closely related to computational complexity problems, e.g., the P=NP problem, and to fundamental one-way function problems in cryptography.


Comparing Object Encodings

Benjamin C. Pierce

Indiana University

The past five years have seen an explosion of foundational models for statically typed object-oriented programming. Many have been presented as translations from high-level object notations to typed lambda-calculi with subtyping, differing in technical details but sharing underlying intuitions. This tutorial lecture develops a uniform notational framework -- a typed lambda-calculus with subtyping, higher-order polymorphism, existential types, and recursive types -- and uses it to compare and contrast four previously studied models.

[Joint work with Kim Bruce and Luca Cardelli.]
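
For readers new to the area, the following toy Haskell fragment (an editorial sketch, not one of the four encodings compared in the tutorial) conveys the simplest intuition underlying all of them: an object is a record of methods that close over the object's state.

    -- An object as a (recursive) record of methods closing over hidden state.
    data Counter = Counter { get :: Int, inc :: Counter }

    newCounter :: Int -> Counter
    newCounter n = Counter { get = n, inc = newCounter (n + 1) }

    -- Example: get (inc (inc (newCounter 0))) evaluates to 2.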


Type inference in systems of recursive types

Trevor Jim

University of Pennsylvania

We present a new method for performing type inference in languages with recursive types and subtyping. The method has been applied to a class of lambda-calculus-based type systems, as well as an object type system with record subtyping and variance annotations. A key technical detail is that we define the subtyping relation using simulations, which leads to a much simpler algorithm for subtyping than previously known. This is joint work with Jens Palsberg.
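
Roughly, and in the spirit of Amadio and Cardelli's treatment of recursive types (the system in the talk may differ in its details), the subtype relation is taken coinductively as the largest relation closed under the expected rules; for instance, $S_1 \to S_2 \le T_1 \to T_2$ requires $T_1 \le S_1$ and $S_2 \le T_2$, with recursive types unfolded on demand. Deciding $S \le T$ then amounts to exhibiting a simulation between the two finite graphs of type subterms, much as bisimulation is checked on finite-state processes.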


Pizza into Java: Translating theory into practice

Philip Wadler

Bell Labs

Pizza is a strict superset of Java that incorporates three ideas from the functional programming community: parametric polymorphism, higher-order functions, and algebraic data types. Pizza is defined by translation into Java and compiles into the Java Virtual Machine, requirements which strongly constrain the design space. Nonetheless, Pizza fits smoothly into Java, with only a few rough edges. This is joint work with Martin Odersky.


Meta-Pi: A Family of Specification Languages

Sam Weber

University of Pennsylvania

A number of specification languages have been designed to aid in the modelling of communication. One problem that users of such languages frequently face is that the language doesn't provide all the operations desired for a particular problem. Simulating these operations results in unwieldy specifications, while extending the language requires extensive proofs that the resulting language is sensible.

We address this issue by proposing the Meta-Pi family of specification languages. Meta-Pi is a meta-language: a language in which you construct other languages. In order to define a Meta-Pi language, one describes all the operations one desires using a particular rule format. Any such language is guaranteed to possess essential properties, such as respecting bisimulation. The rule format is entirely syntactic -- one can determine whether an operation is in the Meta-Pi format statically. This allows a specification language to be easily tailored to a problem.


Coherence completions of categories and game semantics for linear logic

Hongde Hu

University of Pennsylvania

Since linear logic was introduced by Jean-Yves Girard in 1987, many different semantic paradigms for linear logic have been developed, but no canonical theory has emerged. In this two-lecture series we present coherence completions of categories, with the aim of developing a unified theory encompassing the various semantics of linear logic. We will first discuss all necessary background from lattice theory and category theory.

The starting point of the present work is the notion of softness in coherence spaces. The concept of softness appeared in the study of free bicomplete categories as a categorical generalization of Whitman's condition in lattice theory. Softness is a structural rule concerning morphisms from limits into colimits; in lattice theory, limits and colimits are viewed as meets and joins, respectively. In coherence spaces softness has the following consequence: each linear morphism from a conjunction (product) into a disjunction (coproduct) factors through either a product projection or a coproduct injection. We show that the full subcategory of the category of coherence spaces whose objects are generated from the singleton space under products and coproducts is exactly the free bicomplete category of the singleton under the zero object, products, and coproducts.
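
Written out (the notation is added here), softness for coherence spaces says that every linear morphism $f : A \,\&\, B \to C \oplus D$, from a binary product into a binary coproduct, is of the form $g \circ \pi$ for one of the projections $\pi$ out of $A \,\&\, B$, or of the form $\iota \circ h$ for one of the injections $\iota$ into $C \oplus D$.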

The major step we take here is to extend coherence spaces to coherence completions of categories. We show that if a category is a model of linear logic, then so is its coherence completion. In order to interpret the connectives of linear logic in the coherence completion, we present the coherence completion as a monad on the category of categories and functors.

A key idea explored in the present work is the enriched softness between products and coproducts. This important feature is directly related to the associativity of winning strategies of games. We show that the enriched softness provides a convenient basis for game semantics.

Parts of this work are based on collaboration with André Joyal.


A Logic for Reasoning with Higher-Order Abstract Syntax

Raymond McDowell

University of Pennsylvania

Logical frameworks based on intuitionistic or linear logics with higher-type quantification have been successfully used to give high-level, modular, and formal specifications of many important judgements in the area of programming languages and inference systems. Given such specifications, it is natural to consider proving properties about the specified systems in the framework: for example, given the specification of evaluation for a functional programming language, prove that the language is deterministic or that the subject-reduction theorem holds. One challenge in developing a framework for such reasoning is that higher-order abstract syntax (HOAS), an elegant and declarative treatment of object-level abstraction and substitution, is difficult to treat in proofs involving induction. In this talk I will present a meta-logic that can be used to reason about judgements coded using HOAS; this meta-logic is an extension of a simple intuitionistic logic that admits quantification at higher types and simply typed lambda-terms (key ingredients for HOAS) as well as induction and a notion of definition. The latter concept of a definition is a proof-theoretic device that allows certain theories to be treated as "closed" or as defining fixed points. The resulting meta-logic can specify various logical frameworks and a large range of judgements regarding programming languages and inference systems. I will illustrate this point through examples, including the admissibility of cut for a simple logic and subject reduction, determinacy of evaluation, and the equivalence of SOS and natural semantics presentations of evaluation for a simple functional programming language.
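
As a small illustration of the first ingredient mentioned above, higher-order abstract syntax itself (an editorial sketch in Haskell, independent of the meta-logic developed in the talk): object-level binders are represented by meta-level functions, so object-level substitution is just meta-level application.

    -- Untyped lambda terms in higher-order abstract syntax: the body of a
    -- Lam is a meta-level function, so there is no variable case and no
    -- substitution function to write.
    data Term = App Term Term
              | Lam (Term -> Term)

    -- Beta reduction at the root: substitution is function application.
    betaStep :: Term -> Maybe Term
    betaStep (App (Lam f) arg) = Just (f arg)
    betaStep _                 = Nothing

    -- Example: (\x. x x) applied to (\y. y) steps to (\y. y)(\y. y).
    example :: Maybe Term
    example = betaStep (App (Lam (\x -> App x x)) (Lam (\y -> y)))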