Penn Logic and Computation Seminar 2000-2001

The logic and computation group is composed of faculty and graduate students from the computer and information science, mathematics, and philosophy departments, and participates in the Institute for Research in Cognitive Science.

The logic and computation group runs a weekly seminar. The seminar meets regularly during the school year on Mondays at 4:30 p.m. in room 554 of the Moore Building at the University of Pennsylvania. Any changes to this schedule will be specifically noted. The seminar is open to the public and all are welcome.


Abstracts of the Talks:


Pretending Atomicity Based on a Common Locking Discipline

Scott Stoller,
SUNY at Stony Brook

Monday, April 23, at 4:30 in Moore 554

      When locks are used to protect some shared data structures following a common locking discipline, then, roughly speaking, it is safe to pretend that each thread executes atomically, i.e., without interruption by other threads, except at acquire operations and at accesses to unprotected shared data structures. Traditional methods that justify pretending atomicity require checking whether the locking discipline is followed in the original (fine-grained) system. In some contexts, this defeats the purpose of pretending atomicity. We show how to justify pretending atomicity by checking whether the locking discipline is followed in the coarse-grained system.
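      As a small illustration of the kind of discipline involved (a minimal Python sketch of ours, not taken from the talk): every shared structure is guarded by one designated lock and is touched only while that lock is held, so reasoning about the program may treat each region between an acquire and the corresponding release as a single atomic step.
    import threading

    accounts_lock = threading.Lock()        # designated lock for `accounts`
    accounts = {"alice": 10, "bob": 0}      # shared structure, protected by accounts_lock

    def transfer(src, dst, amount):
        # Fine-grained view: each read and write below is a separate step that
        # other threads could interleave with.  Under the locking discipline,
        # it is safe to pretend that the whole region between the acquire and
        # the release executes atomically.
        with accounts_lock:
            if accounts[src] >= amount:
                accounts[src] -= amount
                accounts[dst] += amount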


Formalizing Anonymity Using Group Principals

Paul Syverson,
Naval Research Laboratory

Monday, April 2, at 4:30 in Moore 554

      In this talk I will introduce the concept of a group principal, an entity that is composed of some number of ordinary atomic principals. I will discuss a number of different classes of group principals. These appear to be naturally useful for looking at security. I will describe an associated epistemic language and logic that can be used to abstractly represent, for example, threshold-group principals. The language can also represent adversaries that are more complex and often weaker than those typically assumed to have complete control and knowledge of the network. A sample application area of the language is the epistemic characterization of anonymity properties. Anonymity protection properties afforded by systems or protocols can be formulated in this language in terms of intruder knowledge about actions by group principals.
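      To give the flavour of such a formulation (the formula below is only our illustration, not the definition used in the talk), a simple sender-anonymity property for an adversary $E$, a group principal $G$, and an action $a$ might say that $E$ knows some member of $G$ performed $a$ without knowing which one:
    $K_E (\exists i \in G.\ did(i,a)) \;\wedge\; \bigwedge_{i \in G} \neg K_E\, did(i,a)$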


From the rules of logic to the logic of rules

Jean-Yves Girard,
CNRS Institut de Mathematiques de Luminy

Wednesday, March 7, 4:30 p.m.,
Class of 1949 Auditorium in Houston Hall (2nd floor)

      Prof. Girard's lecture will discuss some developments in logic and proof theory during the twentieth century. These issues will be investigated in relation to computer science. The lecture is aimed at a wide audience, including graduate and undergraduate students in mathematics, computer science, philosophy, linguistics, and cognitive science. The lecture will be given in English.
      Prof. Girard is one of the leading logicians in the world today. His contributions include the polymorphic lambda-calculus (system F), the theory of dilators, linear logic, and ludics. He is an associate member of the French Academy of Sciences. Prof. Girard's visit to Penn is jointly sponsored by the Penn Mathematics Department and the French Institute of Science and Technology.
      Prof. Girard is visiting Penn in conjunction with the 2001 Annual Meeting of the Association for Symbolic Logic, which will be held at Penn, March 10-13. Please see the web site http://www.ircs.upenn.edu/asl2001/ for further information.


Formalizing Security Requirements for Group Key Distribution Protocols

Catherine A. Meadows,
Naval Research Laboratory

Monday, Feb 26, at 4:30 in Moore 554

      Secure group protocols, intended for such applications as key distribution for secure multicast, have become of great practical interest in recent years. However, their security requirements are somewhat different from those of traditional pairwise key distribution protocols, and are not always easy to formalize. In this talk we describe our ongoing work in the formalization and analysis of the Group Domain of Interpretation (GDOI), a secure multicast protocol being developed by the MSec Working Group in the Internet Engineering Task Force, with particular attention to our work in developing a requirements specification using the NRL Protocol Analyzer Temporal Requirements Language (NPATRL). We show how our work in specifying and reasoning about these requirements is leading us to develop a calculus for building and analyzing security requirements for cryptographic protocols.


Safe Ambients: Controlling Interference in Ambients

Davide Sangiorgi

Monday, Feb 19, at 4:30 in Moore 554

      Two forms of interference are identified in Cardelli and Gordon's Mobile Ambients (MA): plain interferences, which are similar to the interferences one finds in CCS and the pi-calculus; and grave interferences, which are more dangerous and may be regarded as programming errors. To control interference, the MA movement primitives are modified. On the new calculus, Safe Ambients (SA), a type system is defined that controls the mobility of ambients and removes all grave interferences. Other advantages of SA include a useful algebraic theory and programs that are sometimes more robust (they require milder conditions for correctness) and/or simpler. These points are illustrated with several examples.
      The talk is based on joint work with Francesca Levi (University of Pisa).
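      For intuition (our illustration; the notation follows the general MA/SA literature and may differ in detail from the talk), the MA movement rule lets an ambient enter a sibling unilaterally, whereas in SA the target must offer a matching co-capability:
    MA:  n[ in m.P | Q ] | m[ R ]                 -->  m[ n[ P | Q ] | R ]
    SA:  n[ in m.P | Q ] | m[ \bar{in} m.R | S ]  -->  m[ n[ P | Q ] | R | S ]
A typical grave interference in MA arises when, for example, open n.R competes with n's own capability in m.P for the same ambient n, so that the outcome depends on which redex fires first; the co-capabilities of SA make such races amenable to control by typing.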


Authenticity by Typing in Security Protocols

Alan Jeffrey
DePaul University

Monday, Feb 12, at 5:15 in Moore 554

      Joint work with Andrew D. Gordon.
      Understanding the correctness of cryptographic authenticity protocols is a difficult problem. We propose a new method for verifying authenticity properties of protocols based on symmetric-key cryptography. First, we code the protocol in the spi calculus of Abadi and Gordon. Second, we specify authenticity properties expected of the protocol by annotating the code with correspondence assertions in the style of Woo and Lam. Third, we figure out types for the keys, nonces, and messages of the protocol. Fourth, we mechanically check that the spi calculus code is well-typed. We have a type safety result that ensures that any well-typed program is robustly safe. It is feasible to apply this method by hand to several well-known cryptographic protocols; hence the method is considerably more efficient than previous analyses of authenticity for the spi calculus. Moreover, the types for protocol data serve as informative documentation of how the protocol works. Our method led us to the identification of problems in an authentication protocol due to Woo and Lam, previously discovered by Abadi, Anderson and Needham.
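      The correspondence assertions mentioned above have a simple trace-level reading, which the following Python sketch of ours makes concrete (the talk's contribution is to establish the property statically by typing rather than by checking traces): a trace is safe if every end-event is preceded by a distinct, matching begin-event.
    from collections import Counter

    def robustly_safe(trace):
        """Each ("end", L) must be matched by an earlier, unused ("begin", L)."""
        pending = Counter()
        for kind, label in trace:
            if kind == "begin":
                pending[label] += 1
            elif kind == "end":
                if pending[label] == 0:
                    return False      # an end-event with no matching begin-event
                pending[label] -= 1
        return True

    # Hypothetical labels: "B has initiated a run with A using nonce n1".
    ok  = [("begin", ("B", "A", "n1")), ("end", ("B", "A", "n1"))]
    bad = [("end", ("B", "A", "n1"))]
    assert robustly_safe(ok) and not robustly_safe(bad)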


Partially Additive Categories and
Fully Complete Models of Linear Logic


Esfandiar Haghverdi
University of Pennsylvania

Part 1
Monday, Jan 29, at 4:30 in Moore 554

Part 2
Monday, Feb 5, at 4:30 in Moore 554

      Traditional completeness theorems are with respect to provability, whereas full completeness is with respect to proofs. In a categorical setting this amounts to asking that the unique functor from the free category representing the logic to the model category be full. Thus, full completeness establishes a tighter connection between syntax and semantics than completeness does. In this talk, we will present a class of new models for the multiplicative fragment of linear logic based on the partially additive categories (PACs) of Manes and Arbib. PACs have become important structures in studying several aspects of iteration and fixed point theories, geometry of interaction, and the semantics of programming languages. We use two constructions to build models of multiplicative linear logic based on PACs, namely the Int construction of Joyal, Street and Verity and the double glueing construction of Loader, Hyland and Tan. Finally, we prove full completeness for these models. The semantic framework for the latter result is the proofs-as-dinatural-transformations paradigm of Bainbridge-Freyd-Scedrov-Scott.
      The talk consists of two parts: in Part I we will introduce the basic mathematical tools and notions needed for our results, and in Part II we will focus on presenting the main results.
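      For readers unfamiliar with the first of these tools, the Int construction can be sketched as follows (a standard formulation, stated up to the evident symmetries; details in the talk may differ). Given a traced symmetric monoidal category $C$, the objects of $Int(C)$ are pairs $(A^+, A^-)$ of objects of $C$; a morphism from $(A^+, A^-)$ to $(B^+, B^-)$ is a morphism
    $A^+ \otimes B^- \to A^- \otimes B^+$
of $C$, and composition is given by symmetric feedback, i.e., by tracing out the $B$ components.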


Multiset Rewriting and the Complexity of Bounded Protocols

Andre Scedrov
University of Pennsylvania

Part 1
Monday, Dec 4, at 4:30 in Moore 554

Part 2
Monday, Dec 11, at 4:30 in Moore 554

      Most current formal approaches to the analysis of cryptographic protocols use the same basic model of adversary capabilities, which appears to have developed from positions taken by Needham and Schroeder and a model presented by Dolev and Yao. In this idealized setting, a protocol adversary is allowed to nondeterministically choose among possible actions. Messages are composed of indivisible abstract values, not sequences of bits, and encryption is modeled in an idealized way. The adversary may only send messages composed of data it "knows" as the result of overhearing previous messages.
      Perhaps the simplest interpretation of protocol transitions is to consider them as a form of rewriting, so that protocol execution could be carried out symbolically. In addition to rewriting to effect state transitions, one also needs a way to choose new values, such as nonces or keys. While this seems difficult to achieve directly in standard rewriting formalisms, the logical inference rules associated with existential quantification appear to be just what is required. Therefore, we have adopted a notation that may be regarded as an extension of multiset rewriting with existential quantification. This formalism is closely related to the so-called existential Horn fragment of linear logic.
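      For illustration (the particular rule is ours, not one taken from the cited work), a role step that generates a fresh nonce can be written as a multiset rewrite rule such as
    $A_0(k) \longrightarrow \exists n.\; A_1(k,n),\; N(enc_k(n))$
read: a principal in local state $A_0$ holding key $k$ moves to state $A_1$, choosing a fresh value $n$ and placing the encrypted message $enc_k(n)$ on the network, represented by the predicate $N$.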
      Using this formalism, it is shown that protocol security is undecidable even when rather severe restrictions are placed on protocols. In particular, even if data constructors, message depth, message width, number of distinct roles, role length, and depth of encryption are bounded by constants, secrecy is an undecidable property. If protocols are further restricted to have no new data (nonces), then secrecy is DEXPTIME-complete. If, in addition, the number of roles is bounded, then secrecy is NP-complete. Hardness is obtained by encoding decision problems from existential Horn theories without function symbols into our protocol framework. The way that encryption and adversary behavior are used in the reduction sheds some light on protocol analysis.
      This is joint work with Cervesato, Durgin, Lincoln, and Mitchell (12-th IEEE Computer Security Foundations Workshop, 1999).


The Church-Rosser Technique and
a High-School Proof for Euler's Partition Theorem.


Max Kanovich
University of Pennsylvania

Monday, Nov 27, at 4:30 in Moore 554

      This topic arose in my 500 course ``Programming Languages and Techniques''. It was inspired by a homework problem of Peter Buneman's - that of writing down a recurrence relation for the number of partitions of $n$ in which all parts are odd, and a recurrence relation for the number of partitions of $n$ in which all parts are distinct.

Notwithstanding the different recurrence relations, Euler's generating functions provide us with the same numbers of odd partitions and distinct partitions for a given $n$.
      The next question is to find an explicit bijection between partitions into odd parts and partitions into distinct parts.
      Kathy O'Hara came up with the bijective proof based on the following 'local' transformations:
    1+1->2, 2+2->4, 3+3->6, 4+4->8, 5+5->10, ...
For instance,
 3+3+1+1+1+1 -> 3+3+2+1+1 -> 6+2+1+1 -> 6+2+2 -> 6+4
and, inversely,
    6+4 -> 3+3+4 -> 3+3+2+2 -> 3+3+2+1+1 -> 3+3+1+1+1+1
The key point of her approach is that the mapping it produces is a total bijection; in particular, the result is independent of the order in which the transformations are applied - ``but this requires a good deal of effort to prove'' (quoted from Herbert Wilf's ``Lectures on Integer Partitions'').
      For a wide variety of partition problems, I will show a direct and transparent proof that we obtain the desired bijection.
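      The local transformations above are easy to run mechanically; the following Python sketch (ours, for illustration) implements the two directions and checks them on the example from the abstract:
    from collections import Counter

    def odd_to_distinct(parts):
        """Merge pairs of equal parts (a + a -> 2a) until all parts are distinct."""
        c = Counter(parts)
        while True:
            dup = next((p for p, k in c.items() if k >= 2), None)
            if dup is None:
                break
            c[dup] -= 2
            if c[dup] == 0:
                del c[dup]
            c[2 * dup] += 1
        return sorted(c.elements(), reverse=True)

    def distinct_to_odd(parts):
        """Split even parts (2a -> a + a) until all parts are odd."""
        parts, out = list(parts), []
        while parts:
            p = parts.pop()
            if p % 2 == 0:
                parts += [p // 2, p // 2]
            else:
                out.append(p)
        return sorted(out, reverse=True)

    # The example from the abstract: 3+3+1+1+1+1 <-> 6+4
    assert odd_to_distinct([3, 3, 1, 1, 1, 1]) == [6, 4]
    assert distinct_to_odd([6, 4]) == [3, 3, 1, 1, 1, 1]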

Automata for Event Monitoring

Karthikeyan Bhargavan,
University of Pennsylvania

Monday, Nov 20, at 4:30 in Moore 554

      We consider the problem of {\it monitoring} an interactive device, such as an implementation of a network protocol, in order to check whether its execution is consistent with its specification. At first glance, it appears that a monitor could simply follow the input-output trace of the device and check it against the specification. However, if the monitor is able to observe inputs and outputs only from a vantage point {\it external} to the device---as is typically the case---the problem becomes surprisingly difficult. This is because events may be buffered, and even lost, between the monitor and the device, in which case, even for a correctly running device, the trace observed at the monitor could be inconsistent with the specification.
      In this paper, we formulate the problem of external monitoring as a {\em language recognition problem}. Given a specification that accepts a certain language of input-output sequences, we define another language that corresponds to input-output sequences observable externally. We also give an algorithm to check membership of a string in the derived language. It turns out that without any assumptions on the specification, this algorithm may take unbounded time and space. To address this problem, we define a series of properties of device specifications or protocols that can be exploited to construct efficient language recognizers at the monitor. We characterize these properties and provide complexity bounds for monitoring in each case.
      To illustrate our methodology, we describe properties of the Internet Transmission Control Protocol (TCP), and identify features of the protocol that make it challenging to monitor efficiently.
      Joint work with Satish Chandra, Pete McCann, Carl Gunter.


Fault Origin Adjudication

Karthikeyan Bhargavan,
University of Pennsylvania

Monday, Nov 13, at 4:30 in Moore 554

      It is generally accepted that non-trivial computer programs will have faults. It is also well known that faults can derive from each of several stages of the software engineering lifecycle. For instance, a program P may deviate from its detailed specification S. But it is also possible that the specification S was incorrect because it failed to ensure high-level user requirements R. In this paper we introduce a technique for automated analysis to determine which of these two possibilities obtains, assuming that high-level requirements have been properly expressed and a deviation from them has been found. We call this process Fault Origin Adjudication (FOA).
      To see a characteristic example, suppose a development project is implementing a standard specification S of a communication protocol. This protocol is expected to have a property R, but testing of the program reveals that the program fails to satisfy R for some test input W. Clearly the problem needs to be repaired, but the way it needs to be repaired depends on the origin of the fault. If the program does not conform to the standard, then this may be the cause of the failure: the program should be revised to conform. This conformance should ensure interoperability with other implementations of S. Moreover, the design of S was probably intended to guide the implementor to a program that satisfies R. However, if the standard does not ensure the requirement, then the standard specification may be the origin of the difficulty. In the worst case, no conformant implementation of the standard will satisfy the property R. In a less extreme case, there is a risk that some implementations will conform with the standard but not satisfy R, leading to potential failures. Thus, if the fault lies with S, then the matter needs to be referred to the standards body and a revision of S should be made.
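      The adjudication decision itself is simple once the two checks are available; here is a minimal Python sketch of ours, in which the predicates conforms_to_S and satisfies_R stand in for automated analyses carried out as described below:
    def adjudicate(trace, conforms_to_S, satisfies_R):
        """Decide where the fault lies for a test trace that was expected to satisfy R."""
        if satisfies_R(trace):
            return "no fault exposed by this trace"
        if not conforms_to_S(trace):
            return "implementation fault: the program deviates from the standard S"
        return "specification fault: S admits behaviour that violates R"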
      We demonstrate how the SPIN Model Checking Environment can be used alongside a trace generation utility like NS to carry out Fault Origin Adjudication for network protocols.
      Joint work with Carl Gunter, Davor Obradovic.


Verisim: Formal Analysis of Network Simulations

Carl A. Gunter,
University of Pennsylvania

Monday, Nov 6, at 4:30 in Moore 554

      Network protocols are often analyzed using simulations. We demonstrate how to extend such simulations to check propositions expressing safety properties of network event traces in an extended form of linear temporal logic. Our technique uses the NS simulator together with a component of the Java MaC system to provide a uniform framework. We demonstrate its effectiveness by analyzing simulations of the Ad Hoc On-Demand Distance Vector (AODV) routing protocol for packet radio networks. Our analysis finds violations of significant properties, and we discuss the faults that cause them. Novel aspects of our approach include modest integration costs with other simulation objectives such as performance evaluation, greatly increased flexibility in specifying properties to be checked, and techniques for analyzing complex traces of alarms raised by the monitoring software.
      This is joint work with Karthikeyan Bhargavan, Moonjoo Kim, Insup Lee, Davor Obradovic, Oleg Sokolsky, and Mahesh Viswanathan.
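      To give the flavour of such a trace check, here is a toy Python sketch of ours; the event format and the particular property (that a node's recorded destination sequence number for a given destination never decreases, an AODV-style invariant) are chosen for illustration and are not taken from the paper:
    def seq_numbers_monotonic(trace):
        """trace: iterable of events (node, dest, seq) recording the destination
        sequence number that `node` currently holds for `dest`."""
        last = {}
        for node, dest, seq in trace:
            key = (node, dest)
            if key in last and seq < last[key]:
                return False          # sequence number went backwards: property violated
            last[key] = seq
        return True

    assert seq_numbers_monotonic([("n1", "d", 5), ("n1", "d", 7)])
    assert not seq_numbers_monotonic([("n1", "d", 7), ("n1", "d", 5)])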


Undecidability of Monadic Horn Theories
based on Linear and Affine Logics.


Max Kanovich
University of Pennsylvania

Monday, Oct 30, at 4:30 in Moore 554

      The Horn theories based on certain refined logical systems (such as linear logic and affine logic) have been proven to be adequate logical formalisms for AI planning problems, (timed) transition systems, protocol analysis, and, furthermore, game winning strategies.
      The Horn theories we are dealing with are defined by a finite number of Horn axioms of the form

$X(x_1,\ldots,x_n) \vdash \exists y_1 y_2 \ldots y_m\; Y(x_1,\ldots,x_n,y_1,y_2,\ldots,y_m)$
where $X$ and $Y$ are products of positive atomic formulas.
      In the propositional case, the pure Horn fragment of linear logic is known to be equivalent to Petri nets and is therefore decidable.
      The general first-order case is undecidable for the classical reason: first-order classical logic is itself undecidable.
      In the monadic case, where we confine ourselves to unary predicate symbols and unary function symbols, and contrary to what might have been expected (in view of results due to Buchi and Rabin), we prove undecidability results for the Horn theories based on linear and affine logics. (Here "the Horn theory $T$ is undecidable" is taken in a strong "reachability" sense: there is no algorithm which, for any Horn sequent, determines whether or not that sequent is derivable from the axioms of $T$ using the rules of the underlying logical system.)

Automatic Abstraction through Syntactic Program Transformations

Kedar Namjoshi
Bell Labs

Monday, Oct 23, at 4:30 in Moore 554

      I will present an algorithm that constructs a finite-state abstract program from a given, possibly infinite-state, concrete program by means of syntactic program transformations. Starting with an initial set of predicates from a specification, the algorithm iteratively computes the predicates required to produce an abstraction relative to that specification. These predicates are represented by boolean variables in the finite-state abstract program.
      We show that the method is sound, in that the abstract program is always guaranteed to simulate the original. We also show that the method is complete, in that, if the concrete program has a finite abstraction with respect to simulation (bisimulation) equivalence, the algorithm can produce a finite simulation-equivalent (bisimulation-equivalent) abstract program. Syntactic abstraction has two key advantages: it can be applied to infinite state programs or to programs with large data domains, and it permits the effective application of other reduction methods for model checking. We show that our method generalizes several known algorithms for analyzing syntactically restricted, data-insensitive programs.
(joint work with Bob Kurshan, Bell Labs)
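      Here is a toy Python sketch of ours of the basic abstraction step (real implementations discharge the abstract transitions with a decision procedure rather than by sampling concrete states as done here): concrete states are mapped to vectors of predicate values, and the induced abstract transitions are collected.
    # Concrete (possibly infinite-state) program: a single integer variable.
    def step(x):
        return x + 1 if x < 3 else 0

    # Predicates suggested by the specification; the algorithm refines this set
    # iteratively.  Each predicate becomes a boolean variable of the abstract program.
    predicates = [lambda x: x < 3, lambda x: x == 3]

    def alpha(x):
        """Abstraction function: map a concrete state to a tuple of predicate values."""
        return tuple(p(x) for p in predicates)

    # Abstract transition relation, computed here by sampling concrete states;
    # a real tool would establish these transitions with a theorem prover.
    samples = range(-2, 8)
    abstract_transitions = {(alpha(x), alpha(step(x))) for x in samples}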


A Reference Model for Requirements and Specifications

Carl A. Gunter,
University of Pennsylvania

Monday, Oct 16, at 4:30 in Moore 554

      I will describe some new results on modeling software engineering artifacts using the WRSPM reference model. The first goal is to give a precise characterization of the refinement conditions for turning a set of requirements into a program via a set of specifications. The key issue is transitivity, ensuring that if the program satisfies the software specification and the specification reduces the user requirements to the language of the programmers, then the program implements the requirements. The second goal is to translate the conditions of the WRSPM model into the form of the Functional Documentation (FD) model of Parnas and Madey.
      This model, also known as the "Four Variables Model," has been used for the analysis of avionic systems and nuclear power plants. The translation of WRSPM shows gaps in the FD model which allow the conditions of the model to be satisfied by an inconsistent refinement. We exhibit the flaw and show how it is addressed by the translation of the WRSPM conditions.
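      As a rough sketch of the transitivity condition at issue (our paraphrase, not the exact formulation of the talk or the underlying paper), write $W$ for domain (world) knowledge, $R$ for the requirements, $S$ for the specification, $M$ for the programming platform, and $P$ for the program. The two refinement steps and their composition then take roughly the form
    $W, S \models R$   (the specification is adequate for the requirements),
    $P, M \models S$   (the program on its platform satisfies the specification),
    hence $W, P, M \models R$   (the program implements the requirements),
together with consistency side conditions that prevent these implications from holding vacuously.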
      This is joint work with Elsa L. Gunter, Michael Jackson, and Pamela Zave.


Relating Cryptography and Polymorphism

Eijiro Sumii
University of Pennsylvania

Monday, Sep 25, at 4:30 in Moore 554

      Cryptography is information hiding. Polymorphism is also information hiding. So is cryptography polymorphic? Is polymorphism cryptographic?
      To investigate these questions, we define the _cryptographic lambda-calculus_, a simply typed lambda-calculus with shared-key cryptographic primitives. Although this calculus is simply typed, it is powerful enough to encode recursive functions, recursive types, and dynamic typing. We then develop a theory of relational parametricity for our calculus as Reynolds did for the polymorphic lambda-calculus. This theory is useful for proving equivalences in our calculus; for instance, it implies the non-interference property: values encrypted by a key cannot be distinguished from one another by any function ignorant of the key. We close with an encoding of the polymorphic lambda-calculus into the cryptographic calculus that uses cryptography to protect type abstraction.
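      For intuition about how cryptography can stand in for type abstraction, here is a rough Python sketch of ours using dynamic sealing in place of shared-key encryption; it is not the paper's encoding:
    import itertools

    def new_seal():
        """A fresh seal: sealed values are opaque handles, and only the matching
        unseal function (the 'key') can recover the contents.  Sealing stands in
        here for encryption under a freshly generated shared key."""
        table, fresh = {}, itertools.count()
        def seal(v):
            handle = ("sealed", next(fresh))
            table[handle] = v
            return handle
        def unseal(handle):
            return table[handle]
        return seal, unseal

    # An abstract "counter" package: clients may use the operations below but
    # cannot inspect or forge the representation, much as with an existential type.
    seal, unseal = new_seal()
    zero  = seal(0)
    succ  = lambda c: seal(unseal(c) + 1)
    value = lambda c: unseal(c)
    assert value(succ(succ(zero))) == 2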
      Our results shed new light on the relationship between cryptography and polymorphism, and offer a first step toward extending programming idioms based on type abstraction (such as modules and packages) from the civilized world of polymorphism, where only well-typed programs are allowed, to the unstructured world of cryptography, where friendly programs must cohabit with malicious attackers.
      This is joint work with Benjamin Pierce.
A full paper is available at http://www.yl.is.s.u-tokyo.ac.jp/~sumii/pub/.


Advanced Module Systems: A Guide for the Perplexed

Benjamin Pierce
University of Pennsylvania

Monday, Sep 11, at 4:30 in Moore 554

      The past three decades have seen a plethora of language features for large-scale software composition. Some of these are fairly simple, others quite sophisticated. Each embodies an implicit claim that its particular combination of features is both necessary and sufficient for some problem domain. But there have been few attempts to compare module systems, or to explain the practical and theoretical issues that motivate the choices they embody. This can make it difficult to evaluate which features are really needed for a given setting, and which may be overkill.
      We offer here an overview of the design space of module systems, with reference to those of some well-known languages (especially the ML family and Java), and record the motivating examples and crucial design choices underlying some central features.

[Joint work with Bob Harper.]

Seminars from previous years

__________________________________________________
Comments about this page can be sent to kanovich@saul.cis.upenn.edu.