As the internet grows rapidly and new communication technologies become accessible, issues arise that are not accounted for by the traditional models of computation. When computational agents can travel between different sites in a global network, matters of security and the possibility of physical failure of connections and sites must be dealt with.

In order to address these issues in a mathematical context, one needs a programming paradigm that can express such concepts as locality and failure, while being conceptually as simple as possible. One such paradigm is the distributed join calculus, introduced in 1995 by Fournet and Gonthier [POPL'96, CONCUR'96]. The join calculus is a concurrent, message-passing calculus similar to Milner's pi-calculus, but more suitable for modeling locality.
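To convey the flavor of a join pattern, here is a toy Python sketch (my illustration, not part of the calculus itself): a reaction fires only once messages are pending on all of the channels it joins.

```python
from collections import deque

class Join:
    """Toy join pattern: run `reaction` once every joined channel has a pending message."""
    def __init__(self, channels, reaction):
        self.queues = {c: deque() for c in channels}
        self.reaction = reaction

    def send(self, channel, message):
        self.queues[channel].append(message)
        # Fire the reaction as soon as all joined channels are non-empty.
        if all(self.queues.values()):
            args = {c: q.popleft() for c, q in self.queues.items()}
            self.reaction(**args)

results = []
# The reaction consumes one message from each of `x` and `y`.
j = Join(["x", "y"], lambda x, y: results.append(x + y))
j.send("x", 1)   # nothing fires yet: `y` is still empty
j.send("y", 2)   # both channels ready: reaction fires with x=1, y=2
```

In the calculus proper, such a join definition is an atomic synchronization primitive; the sketch only mimics its firing discipline sequentially.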

In the talk, I will begin by introducing the join calculus and its semantics in terms of a chemical abstract machine. I will show examples and discuss the relationship to other calculi such as the pi-calculus. The material that I will present is taken from the two papers below. No previous knowledge of distributed calculi is necessary.

Program checking provides a third alternative. Here we seek to prove that the output of a program on any particular input is correct. A program checker is itself a program that is run each time the program being checked is run. Using randomness and interaction (with the program being checked) the checker produces probabilistic proofs of correctness of the output.
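As a concrete illustration (my example; the talk defines the general notion), Freivalds' randomized check of matrix multiplication has exactly this shape: rather than recomputing A*B, the checker tests A(Br) = Cr for random 0/1 vectors r, and a wrong product C escapes each trial with probability at most 1/2.

```python
import random

def mat_vec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def freivalds(A, B, C, trials=20):
    """Probabilistic check that C == A @ B; errs with probability <= 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A*(B*r) with C*r: three matrix-vector products, O(n^2) time.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False  # a definite witness that C is wrong
    return True  # probably correct

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]  # the true product A*B
bad  = [[19, 22], [43, 51]]  # one wrong entry
```

The checker never recomputes the product, so it runs asymptotically faster than the program it checks, which is typical of the paradigm.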

In this self-contained talk we will present a formal notion of checking, give examples of checkers, examine connections between checking and other notions in complexity theory, and explore extensions and modifications of the basic paradigm of checking.

The question of whether the parity test can be defined as a first-order query with polynomial inequality constraints was asked by the late Paris Kanellakis when he introduced the constraint database model. In this talk, I'll review the constraint model, which, for our purposes, can be seen as doing relational theory over interpreted structures, for example, the real field. I will also explain that the inability to deal with interpreted operations is a major gap between theoretical and practical query languages. My main goal is to explain a new collection of techniques for extending the standard results of relational theory to such a setting.

More abstractly, this work can be viewed as doing finite model theory when finite models live inside infinite structures. I demonstrate several techniques for combining the finite and the infinite, and prove several complexity and expressivity bounds for various logics (first-order, second-order, fixpoint logics). By the end of the talk, you'll see at least two different proofs of the Kanellakis conjecture.

This is joint work with Michael Benedikt (Bell Labs), Guozhu Dong (U of Melbourne), and Limsoon Wong (ISS, Singapore).

Based upon:

[1] M. Benedikt, G. Dong, L. Libkin, L. Wong. Relational expressive power of constraint query languages. PODS'96.

[2] M. Benedikt, L. Libkin. On the structure of queries in constraint query languages. LICS'96.

[3] M. Benedikt, L. Libkin. Languages for relational databases over interpreted structures. In preparation.

The main practical difficulty of SCC is in generating the certificates. To see what is involved, I will describe our preliminary experience with a collection of standard network packet filters, written in hand-tuned DEC Alpha assembly language. Our SCC binaries are created automatically by a special prototype assembler, and can be executed with no run-time overhead beyond a one-time cost of about 1 millisecond for validating the enclosed certificates. The net result is that our packet filters are formally guaranteed to be safe and are faster than packet filters created using the Berkeley Packet Filter, Software Fault Isolation, or safe languages such as Modula-3. Finally, I will conclude with prospects for future work.

A "natural born" Goedel provability logic, S4 (1933), had long been lacking its exact intended interpretation. The provability logic (Solovay, 1979) axiomatizes a formal provability operator that cannot be regarded as a decent model for The Provability; for example, the basic reflection property $\Box F \to F$ fails for the formal provability operator.

In traditional modal logic the necessity operator $\Box F$ often stands for "there exists an evidence of F", thus hiding an existential quantifier behind the modality. We demonstrate that for the logic S4 the corresponding class of Skolem-type functions over "evidences" admits a finite, observable operational basis: in fact, it suffices to use three operations only. This allows one to reformulate S4 and some other classical modal logics in a purely operational manner, where a modality occurrence "necessarily F" is realized as "t is an evidence of F" for a corresponding term t. In particular, there is an algorithm that recovers a term assignment to modalities in an S4 proof. So, OML provides a richer language for traditional modal logic.
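For orientation, in the notation that later became standard for this system (Artemov's Logic of Proofs; the abstract itself does not fix the notation), the three operations on evidence terms are application, sum, and proof checking, governed by axiom schemes along the lines of:

\begin{align*}
  & t\!:\!(F \to G) \;\to\; (s\!:\!F \to (t \cdot s)\!:\!G) && \text{(application)}\\
  & t\!:\!F \;\to\; (t+s)\!:\!F, \qquad s\!:\!F \;\to\; (t+s)\!:\!F && \text{(sum)}\\
  & t\!:\!F \;\to\; {!t}\!:\!(t\!:\!F) && \text{(proof checker)}
\end{align*}

Here $t\!:\!F$ reads "t is an evidence of F"; the proof-checker operation internalizes the verification of evidence, which is what drives the recovery of S4's $\Box$.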

This OML technique is applied to answer two old questions:

1. Modal logic S4, which was informally specified by Goedel in 1933 as a logic for The Provability, meets its exact intended interpretation.

2. The Brouwer - Heyting - Kolmogorov realizing operations (1931-32) for intuitionistic logic Int also receive an exact interpretation as corresponding propositional operations on proofs; both S4 and Int turn out to be complete with respect to this proof realization.

The OML is nicely axiomatized and is shown to be decidable, arithmetically complete and sufficient to realize every operation on proofs admitting propositional specification in arithmetic. Hopefully, OML terms can become useful in many other applications of modal logic.

In this talk, I will present various levels of non-determinism in computations on non-deterministic Turing machines with polynomial bounds on the resources. Along the way, I will consider numerous query languages, implicit logic, and choice logic, and establish correspondences between all these formalisms for both deterministic and non-deterministic queries. A paper is available at ftp://ftp.inria.fr/INRIA/Projects/verso/VersoReport-084.ps.Z

The first part of this talk will focus on one of the main methods for proving that a programming language admits this kind of characterization. The method has been applied to a variety of programming languages, including functional and imperative languages, typed and untyped languages, and object-oriented languages.

The second part of the talk will cover a recent application of the method in establishing the semantic foundations for a "hybrid" interactive theorem prover. This hybrid system combines Nuprl and HOL, two systems with very different logics. Nuprl is a constructive type theory, where the basic semantic objects are untyped functional programs, while HOL is essentially a classical set theory. The method is used to extend Nuprl's semantics so that HOL's semantics can be directly embedded into it. The hybrid Nuprl/HOL system retains Nuprl's constructivity and expressive type system, and can draw on the vast store of formal mathematics developed by the HOL user community over the last decade. A prototype of the system has been implemented and is currently being used to verify a complex cache-coherency protocol (among other things).

Lamping's breakthrough was a sophisticated graph-reduction technique for implementing the sharing of lambda-term contexts. As shown more recently by Gonthier, Abadi and Levy, the remarkable point of Lamping's algorithm is its surprising connection to Girard's Linear Logic and to Girard's research on the mathematical foundations of operational semantics through the so-called "Geometry of Interaction" (GOI).

This lecture series will not be overloaded with technical detail. It aims to provide a smooth introduction to the fascinating subject of GOI and optimal reductions, presenting some of their main results and a taste of the truly intriguing nature of optimality: a nice, tough mathematical theory applied to a very practical problem.

The series will consist of two talks:

I. Nets: linear logic proof-nets and pure proof-nets for the lambda calculus. Correctness criteria. Dynamics: can we eliminate the use of boxes and give a graph rewriting system with no global rules?

II. Sharing and optimal reductions. Paths and GOI. Local and asynchronous implementation of the beta reduction.

One of the most problematic issues has been the relationship between subtyping and inheritance, two code-reuse mechanisms. Existing languages combine the mechanisms, limiting subtyping to those object types related via inheritance. This limitation greatly restricts the subtype polymorphism of such languages, preventing code reuse. It also forces programmers to use complex multiple inheritance mechanisms to produce desired subtyping relationships. Past studies have pointed out that subtyping and inheritance are conceptually distinct, suggesting they should be separated. In this talk, I will argue that the confusion regarding the relationship between subtyping and inheritance stems from the fact that there are two different forms of object type being used here. "Interface types", used in analytical studies, specify only the messages their objects understand. Such types are useful because they support a high degree of subtype polymorphism, enabling code reuse. In contrast, "implementation types", used in practice, constrain hidden aspects of their objects' implementations, limiting polymorphism but enabling more efficient message sending and more useful binary methods.

In this talk, I will present a provably sound type-theoretic foundation for object-oriented languages that includes both interface and implementation types, allowing the benefits of both forms of object type to be realized in the same language. This framework reveals that although subtyping and inheritance are distinct for interface types, they are unified for implementation types. The system also suggests that implementation types may be viewed as *subtypes* of the interface types obtained by "forgetting" the implementation constraints. This subtyping relationship enables objects to be manipulated efficiently in contexts that know their implementation constraints, while allowing them to be treated polymorphically in contexts that know only interface information.
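A small Python sketch (my illustration; the talk's framework is type-theoretic, not Python-specific) shows how interface subtyping can come apart from inheritance: a structural `Protocol` plays the role of an interface type, and a class can satisfy it without inheriting from it.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Movable(Protocol):
    """Interface type: specifies only the messages its objects understand."""
    def move(self, dx: int, dy: int) -> None: ...

class Point:
    """An implementation type that inherits from nothing, yet matches the interface."""
    def __init__(self) -> None:
        self.x, self.y = 0, 0
    def move(self, dx: int, dy: int) -> None:
        self.x += dx
        self.y += dy

def translate(m: Movable) -> None:
    m.move(1, 1)  # code written against the interface alone

p = Point()
translate(p)                           # subtype polymorphism without inheritance
assert isinstance(p, Movable)          # structural check: Point "is a" Movable
assert Movable not in type(p).__mro__  # ...though it never inherits from it
```

Python's protocols capture only the interface side of the story; the implementation types of the talk additionally constrain hidden representation, which has no direct analogue here.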

Then I will show that forcing on nonstandard models of bounded arithmetic is closely related to computational complexity problems, e.g., the P = NP problem, and to fundamental one-way-function problems in cryptography.

[Joint work with Kim Bruce and Luca Cardelli.]

We address this issue by proposing the Meta-Pi family of specification languages. Meta-Pi is a meta-language: a language in which one constructs other languages. To define a Meta-Pi language, one describes all the desired operations using a particular rule format. Any such language is guaranteed to possess essential properties, such as respecting bisimulation. The rule format is entirely syntactic -- one can determine statically whether an operation is in the Meta-Pi format. This allows a specification language to be easily tailored to a problem.

The starting point of the present work is the softness of coherence spaces. The concept of softness appeared in the study of free bicomplete categories as a categorical generalization of Whitman's condition in lattice theory. Softness is a structural rule concerning morphisms from limits into colimits. In the case of lattice theory, limits and colimits are viewed as meets and joins, respectively. In coherence spaces, softness has the following consequence: each linear morphism from a conjunction (product) into a disjunction (coproduct) factors through either a product projection or a coproduct injection. We show that the full subcategory of the category of coherence spaces whose objects are generated from the singleton space under products and coproducts is exactly the free bicomplete category of the singleton under the zero object, products, and coproducts.
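In symbols (my rendering of the binary case), the consequence of softness stated above reads:

% Each linear morphism from a product into a coproduct
% factors through a projection or an injection:
\[
f \colon A \times B \multimap C \oplus D
\quad\Longrightarrow\quad
f = g \circ \pi_i \ \text{for some } i \in \{A, B\}
\ \text{ or } \
f = \iota_j \circ h \ \text{for some } j \in \{C, D\}.
\]

That is, no linear map can genuinely mix information from both components of the product while remaining undecided between the two summands of the coproduct.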

The major step we take here is to extend coherence spaces to coherence completions of categories. We show that if a category is a model of linear logic, then so is its coherence completion. In order to interpret the connectives of linear logic in the coherence completion, we present the coherence completion as a monad on the category of categories and functors.

A key idea explored in the present work is the enriched softness between products and coproducts. This important feature is directly related to the associativity of winning strategies of games. We show that the enriched softness provides a convenient basis for game semantics.

Parts of this work are based on collaboration with André Joyal.