Penn Logic and Computation Seminar 2001-2002
The Logic and Computation Group is composed of faculty and graduate
students from the Computer and Information Science and other
departments, and participates in the Institute for Research in
Cognitive Science.
The logic and computation group runs a weekly seminar.
The seminar meets regularly during the school year
on Mondays at 4:31 p.m.
in room 4C8 of the David Rittenhouse Laboratory
at the University of Pennsylvania.
Any changes to this schedule will be specifically noted.
The seminar is open to the public and all are welcome.
Loyola University, Chicago
Wednesday, June 12, 2002, 4:00pm in Moore GRW 554
Abstraction-based Model Checking using Modal Transition Systems
Abstraction-based methods extend the applicability of model checking to
programs written in general-purpose programming languages by extracting
abstract (finite-state) models from the source program. Such methods are
generally unsound, since abstraction usually introduces unrealistic
behaviors.
This talk develops methods to perform automatic abstraction in such a way
that it yields verification results whose soundness can be guaranteed. Our
framework is based on modal transition systems (MTSs) to allow the sound
transfer of properties and counter-examples (for mu-calculus properties)
from the abstract program to the concrete program. MTSs arise naturally in
three-valued program analysis and have robust semantic foundations based on a
mixed powerdomain of states.
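As a loose illustration of this three-valued flavor (a minimal Python
sketch with our own toy encoding, not the authors' algorithms): an MTS
has may-transitions (over-approximating) and must-transitions
(under-approximating, with must contained in may), and a check can come
out true, false, or unknown.

    # Toy MTS: must-transitions are a subset of may-transitions.
    may  = {"s0": ["s1", "s2"], "s1": [], "s2": []}
    must = {"s0": ["s1"],       "s1": [], "s2": []}
    p    = {"s0": False, "s1": True, "s2": False}   # valuation of proposition p

    def ex_p(s):
        """Three-valued check of 'some successor satisfies p'."""
        if any(p[t] for t in must[s]):
            return True      # holds in every concrete refinement
        if not any(p[t] for t in may[s]):
            return False     # fails in every concrete refinement
        return None          # unknown: the abstraction must be refined

    assert ex_p("s0") is True    # s1 is a must-successor satisfying p
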
We describe algorithms to automatically generate an abstract MTS by adapting
existing predicate and cartesian abstraction techniques, and algorithms for
model checking abstract MTSs. These algorithms can be implemented in
combination with existing abstraction techniques without incurring
significant computational overhead. The accuracy of verification results
is improved by the use of generalized model checking, a technique to
improve the precision of reasoning about partial state spaces of concurrent
systems.
The talk is based on joint work with Godefroid, Huth and Schmidt.
University of Pennsylvania
Monday, April 22, 2002, at 4:31 pm in DRL 4C8
Bijective Proofs in Combinatorics
- by Two-Directional Rewriting Techniques
The novelty of our approach to combinatorics is in the use of
rewriting techniques for the purpose of establishing explicit
bijections between combinatorial objects of two different types.
One basic activity in combinatorics is to establish combinatorial
identities by so-called `bijective proofs,' which consist in
constructing explicit bijections between the two types of
combinatorial objects under consideration.
We show how such bijective proofs can be established, and how the
bijections are computed by means of multiset rewriting, for a variety
of combinatorial problems involving partitions.
In particular, we fully characterize all equinumerous partition
ideals with `disjointly supported' complements. As a corollary, a new
proof, the 'bijective' one, is given for all equinumerous classes of
the partition ideals of order 1 from the classical book ``The Theory
of Partitions'' by G. Andrews.
Establishing the required bijections involves novel two-directional
reductions in the sense that forward and backward application of
rewrite rules head for two different normal forms (representing the
two combinatorial types).
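A standard example of this phenomenon (our illustration, not
necessarily one from the talk) is Euler's bijection between partitions
into odd parts and partitions into distinct parts. The single multiset
rule {k, k} <-> {2k} does both jobs: forward rewriting (merging equal
parts) normalizes to distinct parts, and backward rewriting (splitting
even parts) normalizes to odd parts. A minimal Python sketch:

    from collections import Counter

    def to_distinct(parts):
        """Forward: merge equal parts {k, k} -> {2k} until all distinct."""
        c = Counter(parts)
        while True:
            k = next((k for k, m in c.items() if m >= 2), None)
            if k is None:
                return sorted(c.elements(), reverse=True)
            c[k] -= 2
            c[2 * k] += 1
            c += Counter()               # drop parts with count zero

    def to_odd(parts):
        """Backward: split even parts {2k} -> {k, k} until all odd."""
        c = Counter(parts)
        while True:
            k = next((k for k in c if k % 2 == 0), None)
            if k is None:
                return sorted(c.elements(), reverse=True)
            c[k] -= 1
            c[k // 2] += 2
            c += Counter()

    assert to_distinct([3, 3, 3, 1, 1]) == [6, 3, 2]
    assert to_odd([6, 3, 2]) == [3, 3, 3, 1, 1]

Note that the rule {k, k} <-> {2k} is non-overlapping, so forward
rewriting is confluent and the order of merges does not matter.
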
It is well-known that non-overlapping multiset rules are confluent. As
for termination, it generally fails even for multiset rewriting
systems that satisfy certain natural balance conditions (imposed by
the essence of the combinatorial problem). The main technical
development of the paper, which is important for establishing that the
mapping yielding the combinatorial bijection is functional, is that
`restricted' two-directional strong normalization holds for these
multiset rewriting systems.
Aaron D. Jaggard
University of Pennsylvania
Monday, April 1, 2002, at 4:31 pm in DRL 4C8
A formal analysis of some properties of Kerberos 5 using MSR
We present initial results of an ongoing analysis (joint work with
F. Butler, I. Cervesato, and A. Scedrov) of Kerberos 5 using the
MultiSet Rewriting (MSR) framework.
Kerberos 5 is intended to allow the repeated authentication of a
client to multiple servers using a single login. We give MSR
formalizations of this protocol at two levels of detail. This
indicates the utility of the MSR framework in analyzing real-world
protocols and allows us to prove, in our abstract formalization,
some authentication properties of this protocol. In the process, we
find some anomalous behavior in the protocol. Our proofs of
authentication properties use notions of rank and corank inspired
by Schneider, and may suggest reasoning methods which are well
suited to MSR.
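To give the flavor of MSR (a minimal sketch with a made-up toy rule,
not the authors' Kerberos formalization): a global protocol state is a
multiset of facts, and each protocol rule consumes some facts and
produces others. In the real framework, facts are first-order terms and
rules may existentially quantify fresh values such as nonces.

    from collections import Counter

    def step(state, lhs, rhs):
        """Apply one rewrite-rule instance if its left-hand side is present."""
        state, lhs = Counter(state), Counter(lhs)
        if any(state[f] < n for f, n in lhs.items()):
            return None                      # rule not enabled in this state
        return state - lhs + Counter(rhs)    # consume lhs, produce rhs

    # Hypothetical "client requests a ticket" step (facts as plain strings).
    s0 = ["Client(C)", "Fresh(n)"]
    s1 = step(s0, ["Client(C)", "Fresh(n)"],
                  ["Net(C, TGS, n)", "ClientWaiting(C, n)"])
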
University of Pennsylvania
Monday, March 18, 2002, at 4:31 pm in DRL 4C8
Advantage and abuse-freeness in contract-signing protocols
Distributed contract signing over a network involves many challenges
in mimicking the features of paper contract signing. For instance, a
paper contract is usually signed by both parties at the same time and
at the same place, but distributed electronic transactions over a
network are inherently asymmetric in that someone has to send the first
message.
Several digital contract-signing protocols have been devised in order
to overcome this basic asymmetry and to achieve symmetric properties
such as fairness, namely: 1) if nothing goes wrong, both participants
receive a valid contract, 2) every participant may complete the
protocol, and 3) either both participants receive a valid contract or
neither one does. Such protocols often involve a trusted third party
that can enforce the contract after it witnesses a partial completion
of the protocol. In optimistic contract-signing protocols the trusted
third party is contacted only in case of a dispute, otherwise the
protocol can be completed without involving the third party. Such
protocols involve several subprotocols that allow a contract to be
signed normally or aborted or resolved by the trusted third party.
Another kind of symmetry desirable in distributed contract signing was
identified by Garay, Jakobsson, and MacKenzie: a fair protocol is said
to be abuse-free if, at any stage of the protocol, it is impossible
for any participant, say A, to be able to prove to an outside
challenger that A has the power to choose between completing the
contract and aborting it. A formal analysis of the Garay-Jakobsson-
MacKenzie two-party contract-signing protocol was carried out by
Mitchell and Shmatikov using a finite state verification tool. They
showed that negligence of the trusted third party may lead to loss of
abuse-freeness or fairness. Based on their analysis, Mitchell and
Shmatikov also suggested a revision of the protocol. This revised
version is our reference point.
We study an approximation of the abuse-freeness property, namely, at
any stage of a fair protocol, no participant has both the power to
complete the contract and the power to abort it. As noted by Mitchell
and Shmatikov, this property is not trace-
based. We use a multiset-rewriting formalism for protocol analysis to
formally state this property in terms of a certain recursive property
of the protocol execution tree, which we then prove by inductive
methods. Our proof relies on a strong notion of fairness, which itself
we formally state in the multiset-rewriting formalism and prove by
inductive methods. We also show that our approximation of
abuse-freeness, even though it is not a trace-based property, may
nevertheless be represented by means of provability in a logical
system, the multiplicative additive fragment of linear logic, in
which formal derivations correspond to full protocol execution trees
and vice versa. This work was carried out in collaboration with Max
Kanovich and Andre Scedrov.
We also discuss current work in progress with John Mitchell, Andre
Scedrov, and Vitaly Shmatikov. We strengthen the definition of
abuse-freeness in two ways. We consider strategies against honest
participants interested in completing the contract and whose actions
display a bias toward a positive outcome. We suggest a formulation of
"evidence to an outside participant" by means of epistemic logic.
Monday, March 4, 2002, at 4:31 pm in DRL 4C8
The Stable Paths Problem as a Model of BGP Routing
Inter-domain routing in the Internet is currently implemented
with the Border Gateway Protocol (BGP). This protocol
allows every autonomous administrative domain to define local
routing policies that are consistent with its own interests. Such
locally defined policies can interact in ways that cause various
kinds of global routing anomalies --- nondeterministic routing and
protocol divergence ("live lock") --- and these anomalies can
span multiple autonomous domains, making them very difficult to
debug and correct. Analysis of BGP is complicated by the fact that
route messages carry a number of semantically rich attributes that
play a role in the BGP route selection algorithm and in the implementation
of local routing policies. The talk will begin with a short tutorial on
BGP, and so no previous knowledge will be assumed (don't panic!).
I'll present a simple static semantics for BGP called the Stable Paths
Problem (SPP). This formalism can be viewed as a simple kind of game or
as a generalization of the stable matching problem. I'll argue that the
BGP protocol can be modeled as a distributed algorithm that attempts to
solve instances of the Stable Paths Problem. If the Stable Paths Problem
has no solution, the protocol will not converge. The protocol can also
diverge when there is a solution but it gets "trapped" down a blind alley.
Nondeterministic routing is associated with Stable Paths Problem instances that
have multiple solutions. I'll also show that several interesting questions
related to SPP (and BGP) solvability are NP-complete, and I'll describe a
recent extension to SPP needed to model BGP's ability to do "cold potato
routing."
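To make the formalism concrete, here is a minimal Python sketch (our
own toy encoding) of an SPP instance and a synchronous best-response
dynamics, a crude stand-in for BGP's asynchronous behavior. The
instance is a version of the well-known "bad gadget," which has no
stable path assignment:

    # Each node ranks its permitted paths to the origin 0, best first.
    permitted = {
        1: [(1, 2, 0), (1, 0)],   # 1 prefers to route through 2
        2: [(2, 3, 0), (2, 0)],   # 2 prefers to route through 3
        3: [(3, 1, 0), (3, 0)],   # 3 prefers to route through 1
    }

    def best_response(choice):
        """Each node picks its best permitted path that extends the
        path currently chosen by its next hop (or goes direct to 0)."""
        new = {}
        for v, ranked in permitted.items():
            new[v] = ()                              # default: no path
            for path in ranked:
                tail = path[1:]
                if tail == (0,) or choice.get(path[1]) == tail:
                    new[v] = path
                    break
        return new

    choice, seen = {v: () for v in permitted}, set()
    while True:
        key = tuple(sorted(choice.items()))
        if key in seen:
            print("oscillates: no stable assignment reached")  # divergence
            break
        seen.add(key)
        nxt = best_response(choice)
        if nxt == choice:
            print("converged:", choice)
            break
        choice = nxt

On this instance the dynamics alternates forever between the all-direct
and all-indirect assignments, mirroring BGP divergence.
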
I'll survey joint work that I've done with Lixin Gao (UMASS),
Jennifer Rexford (AT&T Research), F. Bruce Shepherd (Bell Labs), and
Gordon Wilfong (Bell Labs). Papers can be found at
An extended BGP tutorial can be found at
Benjamin C. Pierce,
University of Pennsylvania
Monday, February 18, 2002, at 4:31 pm in DRL 4C8
Native XML processing in a statically typed language:
- A progress report on the Xtatic design
The recent rush to adopt XML can be attributed in part to the hope
that the static typing provided by DTDs (or more sophisticated
mechanisms such as XML-Schema) will improve the robustness of data
exchange and processing. However, although XML _documents_ can be
checked for conformance with DTDs, current XML processing languages
offer no way of verifying that _programs_ operating on XML structures
will always produce conforming outputs.
In previous work, we have designed and implemented a domain-specific
language for XML processing, called XDuce. The main novelties of XDuce are:
1) A type system based on REGULAR EXPRESSION TYPES. Regular
expression types are a natural generalization of DTDs, describing
structures in XML documents using regular expression operators (*,
?, |, etc.) and supporting a powerful form of subtyping.
2) A corresponding mechanism for REGULAR EXPRESSION PATTERN
MATCHING, which supports concise "grep-style" patterns for
extracting information from inside structured sequences.
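As a loose analogy (ours, in Python's re module): a regular expression
type is essentially a content model over a sequence of children, so
conformance is regular-language membership. Real regular expression
types additionally support structural subtyping, i.e. language
inclusion, which re does not provide.

    import re

    # Content model for a hypothetical person element: Name, Email*, Tel?
    person = re.compile(r"name(,email)*(,tel)?$")

    assert person.match("name,email,email,tel")    # conforming sequence
    assert not person.match("name,tel,email")      # wrong order: rejected
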
The lessons learned from XDuce are now being incorporated in a new
language, called Xtatic, whose design focuses on smooth integration of
these novel XML-processing features into mainstream, object-oriented
languages such as C#. The current vision is that Xtatic will be
engineered as a lightweight extension to C#, offering native support
for regular expression types and patterns and completely interoperable
at the binary level with ordinary C# programs and APIs.
The talk will describe the basic design principles of Xtatic, the
technical issues that have been addressed so far (in particular, the
integration of regular expression types with the type structures of
objects and classes), and the issues still requiring work.
University of Pennsylvania
Monday, February 11, 2002, at 4:31 pm in DRL 4C8
A probabilistic polynomial-time calculus
for analysis of cryptographic protocols
We describe properties of a process calculus that has been developed
for the purpose of analyzing security protocols. The process calculus
is a restricted form of pi-calculus, with bounded replication and
probabilistic polynomial-time expressions allowed in messages and
boolean tests. In order to avoid problems expressing security in the
presence of nondeterminism, messages are scheduled probabilistically
instead of nondeterministically. We prove that evaluation may be
completed in probabilistic polynomial time and develop properties of a
form of asymptotic protocol equivalence that allows security to be
specified using observational equivalence, a standard relation from
programming language theory that involves quantifying over possible
environments that might interact with the protocol. We also relate
process equivalence to cryptographic concepts such as pseudo-random
number generators and polynomial-time statistical tests. The work
has been carried out in collaboration with P. Lincoln, J. Mitchell,
M. Mitchell, A. Ramanathan, and V. Teague.
Monday, January 28, 2002, at 4:00 pm in DRL 4N30
On the continuity of the roots: old and new
We will show how the classical theorem on the continuity of the roots
of algebraic functions in one variable can be reduced via some
``easy'' model theory to an ``obvious fact''. This also leads to a
vast generalisation of the result under discussion. Some questions about
higher-dimensional variants will also be asked/discussed.
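For reference, one standard formulation of the classical theorem (our
phrasing):

    Let p(z) = z^n + a_{n-1} z^{n-1} + \cdots + a_0 have roots
    \lambda_1, \ldots, \lambda_n, listed with multiplicity. For every
    \epsilon > 0 there is \delta > 0 such that whenever
    |b_i - a_i| < \delta for all i, the roots \mu_1, \ldots, \mu_n of
    q(z) = z^n + b_{n-1} z^{n-1} + \cdots + b_0 can be indexed so that
    |\mu_i - \lambda_i| < \epsilon for each i.
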
Stevens Institute of Technology
Monday, December 10, 2001, at 4:31 pm in DRL 4C8
A Heap-Bounded Assembly Language
The pioneering work of Necula and Lee on Proof-Carrying Code motivated
recent developments in typed intermediate languages and typed assembly
languages that enforce various security policies on compiled code.
In this talk we present ongoing joint work with David Aspinall on the
development of a typed assembly language for safe memory space reuse,
to be used in a Proof Carrying Code environment.
We present a typed assembly language, HBAL, which has a type system
with features for managing space usage. In contrast to other typed
assembly languages, HBAL allows memory areas to be reused for values
of differing types. We ensure type safety by using a type system with
linearity constraints to prevent aliasing of heap locations, together
with special pseudo-instructions for altering types at isolated points
in the code. The reuse discipline is controlled by compilation from a
high-level language, and we demonstrate a compilation from a
prototypical first-order functional language, LFPL, due to Hofmann.
University of Pennsylvania
Monday, November 26, 2001, at 4:31 pm in DRL 4C8
Signed binary expansions
Workshop on the best known method for "infinite precision" real
arithmetic and its connection with the construction of the reals
as a co-inductive type.
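As a minimal sketch (our own toy encoding, not the workshop's
material): in signed binary, a real x in [-1, 1] is an infinite stream
of digits d_i in {-1, 0, 1} with x = sum_{i>=1} d_i 2^{-i}. The
redundancy of the digit set is what makes incremental arithmetic
possible, and a lazily consumed stream is the programming analogue of a
coinductive real.

    from itertools import islice

    def digits(x):
        """Yield a signed binary expansion of x in [-1, 1]."""
        while True:
            d = 1 if x > 0.5 else (-1 if x < -0.5 else 0)
            yield d
            x = 2 * x - d        # invariant: x stays in [-1, 1]

    def approx(ds, n):
        """Dyadic rational given by the first n digits."""
        return sum(d / 2 ** (i + 1) for i, d in enumerate(islice(ds, n)))

    assert abs(approx(digits(0.7), 20) - 0.7) < 2 ** -19
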
Monday, November 19, 2001, at 4:31 pm in DRL 4C8
The Girard-Reynolds Isomorphism
The second-order polymorphic lambda calculus, F2, was
independently discovered by Girard and Reynolds. Girard
additionally proved a representation theorem: every function
on natural numbers that can be proved total in second-order
intuitionistic propositional logic, P2, can be represented in F2.
Reynolds additionally proved an abstraction theorem: for a suitable
notion of logical relation, every term in F2 takes related arguments
into related results. We observe that the essence of Girard's result
is a projection from P2 into F2, and that the essence of Reynolds's
result is an embedding of F2 into P2, and that the Reynolds embedding
followed by the Girard projection is the identity.
The Girard projection
discards all first-order quantifiers, so it seems unreasonable to expect
that the Girard projection followed by the Reynolds embedding should
also be the identity. However, we show that, in the presence of Reynolds's
parametricity property, this is indeed the case for propositions
corresponding to inductive definitions of naturals, products, sums,
and fixpoint types.
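For concreteness, here is the case of the naturals (standard
definitions, our notation). In F2 the naturals are the type

    N = \forall X. (X \to X) \to X \to X,

while in P2 the corresponding inductively defined predicate is

    N(x) = \forall P. (\forall y. P(y) \supset P(s(y))) \supset
           (P(0) \supset P(x)).

The Girard projection discards the first-order quantifier and the
individual variables, yielding the type N; the Reynolds embedding sends
N back to a proposition of this inductive form, and parametricity makes
the round trip the identity.
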
University of Pennsylvania
Monday, November 5, 2001, at 4:31 pm in DRL 4C8
Monday, November 12, 2001, at 4:31 pm in DRL 4C8
Undecidability of the optimistic protocol completion
for a `monadic' class of communication protocols
One could claim that the undecidability of the problem of
optimistic protocol completion in the absence of an intruder
is not so interesting, since the standard security protocol
settings include some model of the intruder, who, for instance,
can control the network, and can duplicate and delete messages
(but not the state of protocol participants).
But many of the approaches for analyzing and reasoning about
protocols are based on certain formal specification languages,
and, prior to analysis of protocol properties related to
unreliable broadcast, we have to show that
our formal specification of a given protocol meets
the protocol rules and conventions at least under ideal conditions
when no one interferes with network transmission.
I will show that the optimistic protocol completion is undecidable
even for the class of protocols with two participants such that
each of them is a finite automaton provided with one register
to store one atomic message, and no compound messages are used.
Monday, October 29, 2001, at 4:31 pm in DRL 4C8
An Approach to Model Theory for Finite Structures
This is joint work in progress with Dugald Macpherson (Leeds). The
point is to develop a model theory for classes of finite structures
that bears a similarity to contemporary model theory for classes of
infinite structures, in which some notion of dimension typically plays
a central role. We thus refer to the classes we study as "classes of
finite structures with dimension and measure." Our definition of
dimension and measure is directly inspired by the paper, "Definable
sets over finite fields," by Chatzidakis, van den Dries, and Macintyre
(Crelle 1992). Only the most basic familiarity with model theory will
be assumed.
Contact Scott Weinstein (firstname.lastname@example.org) for an
appointment with Prof. Steinhorn.
University of Pennsylvania
Monday, October 22, 2001, at 4:30 pm in DRL 4C8
Coinduction and real analysis
Physicists know how to integrate over all possible paths, computer-
vision experts want to assign probabilities to arbitrary scenes, and
numerical analysts act as if some continuous functions are more
typical than others. In these three disparate cases, a more flexible
notion of integration is being invoked than is possible in the
traditional foundations for mathematics. If we allow ourselves a highly
speculative mode, at the intersection of category theory and computer
science, we may bump into some solutions to the problem.
IST - Universidade Tecnica de Lisboa
Monday, October 15, 2001, at 4:31 pm in DRL 4C8
An Algebraic Approach to Combining Stochastic Systems
Combining transition systems can be easily achieved with limits,
subobjects and fibrations. However, applying this dogma to stochastic
systems leads to several hurdles. In fact, the suitable notion of
morphism between probability spaces fails to be closed under
composition, and hence, we obtain a non-category!
After covering the notions of weak categories in the literature, we end
up with Freyd's framework on paracategories. Then, we recover the
notion of limit and fibration via adjunctions. This is achieved in
two different ways, both capitalizing on the cartesian closed category
of paracategories. On one hand, we generalize Lawvere's comma
construction and define adjunction through isomorphism. On the other
hand, based on the monoid classifier Delta, we internalize
paracategories, and then consider the notion of enriched-adjunction
which ensues. The latter reformulation can be interpreted in any
category which admits the construction of a bicategory of partial
maps. We conclude by applying the established results to the fibred
paracategories of stochastic systems.
University of Pennsylvania
Monday, October 1, 2001, at 4:30 pm in DRL 4C8
Language Support for Program Generation:
Semantics, Implementation, and Applications
This talk will present my dissertation work, which develops
programming languages and associated techniques for sound and
efficient implementations of algorithms for program generation.
First, we develop a framework for practical two-level languages. In
this framework, we demonstrate that two-level languages are not
only a good tool for describing program-generation algorithms, but a
good tool for reasoning about them and implementing them as well.
We pinpoint several general properties of two-level languages that
capture common proof obligations of program-generation algorithms:
- To prove that the generated program behaves as desired, we use an
erasure property to reduce the two-level proof obligation to a simpler
one-level proof obligation.
- To prove that the generated program satisfies certain syntactic
constraints, we use a type-preservation property for a refined type
system that enforces these constraints.
In addition, to justify concrete implementations, we use a native
embedding of a two-level language into a one-level language.
We present two-level languages with these properties both for a
call-by-name object language and for a call-by-value object
language with computational effects, and demonstrate them through
two classes of non-trivial applications: one-pass transformations
into continuation-passing style and type-directed partial evaluation
for call-by-name and for call-by-value.
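As a minimal illustration of the two-level idea (the textbook power
example, sketched by us in Python with strings as the residual code):
static computation runs now and splices together dynamic code that runs
later. The erasure property mentioned above then says that the residual
program agrees with the unstaged one.

    def power_gen(n):
        """Static level: generate a power function specialized to n."""
        body = "1"
        for _ in range(n):
            body = f"x * ({body})"
        return f"lambda x: {body}"

    cube = eval(power_gen(3))    # the residual (dynamic-level) program
    assert cube(2) == 2 ** 3     # erasure: specialization preserves meaning
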
Next, to facilitate implementations, we develop several general
approaches to programming with type-indexed families of values
within the popular Hindley-Milner type system. Type-indexed
families provide a form of type dependency, which is employed by
many algorithms that generate typed programs, but is absent from
mainstream languages. Our approaches are based on type encodings,
so that they are type safe. We demonstrate and compare them
through a host of examples, including type-directed partial
evaluation and printf-style formatting.
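For the printf example, the idea can be sketched with
continuation-passing combinators in the style of Danvy's "functional
unparsing" (our Python rendering; in an ML-style type system the
combinators are typed, so the arity and argument types of the resulting
formatter are checked statically):

    def lit(s):                 # literal text: consumes no argument
        return lambda k: lambda acc: k(acc + s)

    num = lambda k: lambda acc: lambda n: k(acc + str(n))   # one int argument

    def cat(f, g):              # sequential composition of format specs
        return lambda k: f(g(k))

    def fmt(spec):              # run a spec against the empty accumulator
        return spec(lambda acc: acc)("")

    assert fmt(cat(lit("x = "), num))(42) == "x = 42"
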
Finally, building on the two-level framework and type-encoding
techniques, we recast joint work with Bernd Grobauer, in which we
formally derived a suitable self-application of type-directed partial
evaluation and achieved automatic compiler generation.
University of Pennsylvania
Monday, September 24, 2001, at 4:30 pm in DRL 4C8
Primitive recursion as iteration;
general recursion as generalized iteration
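The first half of the title alludes to a classical trick, sketched here
(our Python rendering): iteration only applies a step function n times,
yet it expresses primitive recursion by iterating on pairs that carry
the loop counter along.

    def iterate(step, z, n):
        v = z
        for _ in range(n):
            v = step(v)
        return v

    def primrec(f, z, n):
        """rec(0) = z and rec(i+1) = f(i, rec(i)), via iteration on pairs."""
        return iterate(lambda p: (p[0] + 1, f(p[0], p[1])), (0, z), n)[1]

    assert primrec(lambda i, acc: (i + 1) * acc, 1, 4) == 24   # factorial(4)
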
BRICS, University of Aarhus
Wednesday, September 19, at 4:30 in DRL 4C6
Proof Mining: uncovering the computational content of proofs
PLEASE NOTE SPECIAL PLACE AND TIME
With the term `proof mining' we denote the activity of transforming a
prima facie non-constructive proof into a new one from which certain
computational information can be read off which was not visible
beforehand. Already in the 1950's Georg Kreisel realized that logical
techniques from proof theory -- originally developed for foundational
purposes -- can be put to use here. In recent years, a more systematic
proof theoretic approach to proof mining in numerical analysis emerged,
yielding new quantitative (and even qualitative) numerical
results in approximation theory and fixed point theory and providing
a bridge between mathematical numerical analysis and the area of
computability (and complexity) in analysis which has mainly been
developed by logicians (and complexity theorists).
Although proof mining has been applied also to e.g. number theory and
combinatorics, the area of numerical (functional) analysis is
of particular interest since here non-effectivity is at the core of many
principles (like compactness arguments) which are used to ensure
convergence. In mathematical terms, this non-computability is often an
obstacle to obtaining a quantitative stability analysis and rates of
convergence.
We will give a survey and discuss two recent applications concerning
1) New uniform bounds for theorems of Ishikawa and Borwein-Reich-Shafrir
on the asymptotic regularity of non-expansive mappings, which yield
even new qualitative results, and
2) A fully explicit description of the rate of strong
unicity for best L_1-approximations of continuous functions by
polynomials (joint work with Paulo Oliva).
Comments about this page can be sent to