Technical Report Abstracts



DHCP++: Applying an efficient implementation method for fail-stop cryptographic protocols

William A. Arbaugh, Angelos D. Keromytis and Jonathan M. Smith

The DHCP protocol is used to dynamically allocate IP addresses and configure client hosts. The protocol greatly eases the administration of an IP subnetwork and is thus widely used.

The basic approach of the DHCP protocol is for a client to broadcast a request for an address, and for one or more servers to respond with addresses. This broadcast exchange creates significant security risks in the presence of active attackers.

We have designed a new, efficient implementation method for the fail-stop cryptographic protocols originated by Gong and Syverson. The implementation method uses cryptographic hashes of the state of the sender and receiver and the exchanged messages to detect whether any deviation from expected behavior has taken place. If it has, an attack is assumed and the protocol ceases execution. We present a proof outline of protocol security using our method.
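The hash-chaining idea can be pictured with a minimal sketch. This is only an illustration of the fail-stop principle, not the paper's actual implementation; the class, the message contents, and the digest-comparison step are all invented here:

```python
import hashlib

class FailStopEndpoint:
    """Illustrative sketch: each side folds every exchanged message
    into a running hash of its state; any mismatch with the peer's
    digest is treated as evidence of an attack, and execution stops."""

    def __init__(self, initial_state: bytes):
        self.digest = hashlib.sha256(initial_state).digest()
        self.halted = False

    def absorb(self, message: bytes) -> None:
        # Chain the message into the running state hash.
        self.digest = hashlib.sha256(self.digest + message).digest()

    def check_peer(self, peer_digest: bytes) -> None:
        # Fail-stop: any deviation halts the protocol.
        if peer_digest != self.digest:
            self.halted = True
            raise RuntimeError("state divergence detected; protocol halted")

# Two honest endpoints stay synchronized...
a = FailStopEndpoint(b"init")
b = FailStopEndpoint(b"init")
for m in (b"DISCOVER", b"OFFER"):
    a.absorb(m)
    b.absorb(m)
a.check_peer(b.digest)  # no exception: digests agree

# ...but a tampered message stream is detected.
c = FailStopEndpoint(b"init")
c.absorb(b"DISCOVER")
c.absorb(b"FORGED-OFFER")
try:
    a.check_peer(c.digest)
except RuntimeError:
    print("attack detected")
```

The point of the sketch is that detection needs no per-message signatures: a single digest comparison covers the entire history of state and messages.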

We have applied our method to DHCP, resulting in DHCP++. DHCP++ uses our fail-stop implementation technique to prevent any attacks that could otherwise violate DHCP's security assumptions. The paper analyzes the threats eliminated by this enhancement, and measurements against DHCP show that the incremental performance costs are minimal.


Performance Implications of Securing Active Networks

D. Scott Alexander, William A. Arbaugh, Angelos D. Keromytis and Jonathan M. Smith

Security is an obvious risk to active networking, as increased flexibility creates numerous opportunities for mischief. The point at which this flexibility is exposed, e.g., through the loading of code into network elements, must therefore be carefully crafted to ensure security.

The Secure Active Network Environment (SANE) architecture provides a secure bootstrap process resulting in a module loader/packet execution environment. As a set of nodes bootstrap, they exchange certificates to permit secure module exchange.

This paper demonstrates that SANE, while exhibiting performance degradation relative to unsecured operation, is able to perform acceptably. We include measurements comparing the loading of an active ping on a secure versus an insecure infrastructure.


Firewalls for Active Networks

Angelos D. Keromytis, Matt Blaze, John Ioannidis, and Jonathan M. Smith

Firewalls have been in use in the Internet for a number of years. They are deployed in companies, university campuses, Internet Service Providers and a variety of other network environments. The main reasons for their proliferation have been ease of administration and their role as a single point of policy enforcement.

Active Networks drastically change the network infrastructure and communication primitives. Because of their increased flexibility, they also introduce new security risks. Furthermore, a traditional transparent firewall in such an environment cannot effectively enforce the policies set by the administrator, since active packets can no longer be trivially analyzed.

In this paper we propose a shift in the role of a firewall in such an environment. Instead of acting as policy enforcers, firewalls will now have to act as policy specifiers. The switches and hosts in the interior of the protected network will have to enforce these policies. We examine ways of deciding the access policies and of subsequently enforcing them. We describe an architecture based on the PolicyMaker system and the SwitchWare active network architecture.


On-the-fly Programmable Hardware for Networks

Ilija Hadzic and Jonathan M. Smith

Ongoing research in adaptive protocols and active networks has presumed that flexibility is offered exclusively through software systems, and the performance implications have generated considerable skepticism. The Programmable Protocol Processing Pipeline (P4) exploits the dynamic reconfigurability of RAM-based Field Programmable Gate Arrays (FPGAs) to provide both hardware performance and dynamic functionality to network components.

We use forward error correction (FEC) as an example of a protocol processing function. Our measurements show that the P4 is able to process the data stream at OC-3 (155 Mbps) link rate, and consequently improve TCP performance in noisy environments.


Types and Intermediate Representations

Michael Hicks

The design objectives and the mechanisms for achieving those objectives are considered for each of three systems: Java, Erlang, and TIL. In particular, I examine the use of types and intermediate representations in each system's implementation. In addition, the systems are compared to examine how one system's mechanisms may (or may not) be applied to another.


Updating Complex Value Databases

H. Liefke and S. Davidson

Query languages and their optimizations have been a very important issue in the database community. Languages for updating databases, however, have not been studied to the same extent, although they are clearly important since databases must change over time. The structure and expressiveness of updates is largely dependent on the data model. In relational databases, for example, the update language typically allows the user to specify changes to individual fields of a subset of a relation that meets some selection criterion. The syntax is terse, specifying only the pieces of the database that are to be altered. Because of its simplicity, most of the optimizations take place in the internal processing of the update rather than at the language level. In complex value databases, the need for a terse and optimizable update language is much greater, due to the deeply nested structures involved.
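The terse relational update style described above can be seen in miniature with SQL. The table, columns, and data below are hypothetical, chosen only to illustrate the point that the user names just the altered fields and the selection criterion:

```python
import sqlite3

# A small in-memory table, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
db.executemany("INSERT INTO emp VALUES (?, ?, ?)",
               [("ann", "db", 100), ("bob", "db", 90), ("carl", "os", 80)])

# The update names only the changed field and the qualifying subset;
# how the tuples are located and rewritten is left to the engine,
# which is where the optimization happens in the relational setting.
db.execute("UPDATE emp SET salary = salary + 10 WHERE dept = 'db'")

print(sorted(db.execute("SELECT name, salary FROM emp")))
# → [('ann', 110), ('bob', 100), ('carl', 80)]
```

For deeply nested complex values there is no equally terse standard notation, which is the gap the abstract's CPL+ extension addresses.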

Starting with a query language for complex value databases called the Collection Programming Language (CPL), we describe an extension called CPL+ which provides a convenient and intuitive specification of updates on complex values. CPL is a functional language, with powerful optimizations achieved through rewrite rules. Additional rewrite rules are derived for CPL+ and a notion of ``deltafication'' is introduced to transform complete updates, expressed as conventional CPL expressions, into equivalent update expressions in CPL+. As a result of applying these transformations, the performance of complex updates can increase substantially.


Exploratory Aspects of Sensor Based Planning

Andrew Hicks, David Petty
In sensor-based planning, exploration is unavoidable. To understand this aspect of sensor-based planning, the authors consider the problem of motion planning for a point with ``tactile sensors''. In dimensions greater than two, this problem has been shown to be unsolvable given a certain mathematical framework. But if the formulation of the problem is changed by taking the $C$-space to be discrete, then path planning with tactile sensors is possible. In this setting we give a resolution-complete algorithm for planning the motion of a point in any dimension. Measuring the complexity of the problem by the number of discrete moves that the robot makes, we give an upper bound for the complexity of our algorithm that is linear in the surface area of the boundary of the $C$-space obstacles.
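One way to picture tactile planning in a discrete $C$-space is the following sketch. This is our own illustration, not the authors' algorithm: the robot plans optimistically over a grid, learns that a cell is an obstacle only by bumping into it, and replans on contact.

```python
from collections import deque

def tactile_plan(start, goal, blocked_probe, size):
    # blocked_probe(cell) models a tactile sensor: it answers only
    # for the single cell the robot attempts to enter.
    known_blocked = set()

    def bfs(src):
        # Optimistic BFS over cells not yet known to be blocked.
        prev, frontier = {src: None}, deque([src])
        while frontier:
            c = frontier.popleft()
            if c == goal:
                path = []
                while c is not None:
                    path.append(c)
                    c = prev[c]
                return path[::-1]
            x, y = c
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= n[0] < size and 0 <= n[1] < size
                        and n not in known_blocked and n not in prev):
                    prev[n] = c
                    frontier.append(n)
        return None

    pos, moves = start, 0
    while pos != goal:
        path = bfs(pos)
        if path is None:
            return None, moves          # goal provably unreachable
        nxt = path[1]
        moves += 1                      # count every attempted move
        if blocked_probe(nxt):          # contact: learn the obstacle
            known_blocked.add(nxt)
        else:
            pos = nxt
    return pos, moves

# A 3x3 grid with two hypothetical obstacle cells.
obstacles = {(1, 0), (1, 1)}
pos, moves = tactile_plan((0, 0), (2, 2), lambda c: c in obstacles, 3)
```

Counting attempted moves, as the abstract does, charges the robot both for travel and for every informative bump against an obstacle boundary.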


Robust Invariants from Functionally Constrained Motion

R. Andrew Hicks, Kostas Daniilidis, Ruzena Bajcsy and David Pettey
We address the problem of control-based recovery of robot pose and the environmental layout. Panoramic sensors provide us with a 1D projection of characteristic features of a 2D operation map. Trajectories of these projections contain the information about the positions of a priori unknown landmarks in the environment. We introduce here the notion of spatiotemporal signatures of projection trajectories. These signatures are global measures, like area, characterized by considerably higher robustness with respect to noise and outliers than the commonly applied point correspondences. By modeling the 2D motion plane as the complex plane, we show by means of complex analysis that our method can be embedded in the well-known affine reconstruction paradigm.
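As a concrete, simplified example of such an area-type signature (our own illustration, not necessarily the paper's exact measure), the signed area enclosed by a closed trajectory can be computed directly in complex arithmetic:

```python
def trajectory_area(zs):
    # Signed area of the closed polygonal trajectory z_0, ..., z_{n-1},
    # via the shoelace formula in complex form:
    #   A = (1/2) * sum_k Im( conj(z_k) * z_{k+1} )
    n = len(zs)
    return 0.5 * sum((zs[k].conjugate() * zs[(k + 1) % n]).imag
                     for k in range(n))

# Unit square traversed counterclockwise: area 1.  Being a global
# measure, the signature does not depend on where the traversal starts,
# and a single outlier point perturbs it only by a bounded amount.
square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
area = trajectory_area(square)                      # 1.0
shifted = trajectory_area(square[2:] + square[:2])  # same signature
```

This is the sense in which global measures like area are more robust than individual point correspondences: no single sample is privileged.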


Alternating-time Temporal Logic

Rajeev Alur, Thomas A. Henzinger, and Orna Kupferman

Temporal logic comes in two varieties: linear-time temporal logic assumes implicit universal quantification over all paths that are generated by system moves; branching-time temporal logic allows explicit existential and universal quantification over all paths. We introduce a third, more general variety of temporal logic: alternating-time temporal logic offers selective quantification over those paths that are possible outcomes of games, such as the game in which the system and the environment alternate moves. While linear-time and branching-time logics are natural specification languages for closed systems, alternating-time logics are natural specification languages for open systems. For example, by preceding the temporal operator ``eventually'' with a selective path quantifier, we can specify that in the game between the system and the environment, the system has a strategy to reach a certain state. The problems of receptiveness, realizability, and controllability can also be formulated as model-checking problems for alternating-time formulas.
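The ``eventually'' example can be written out in the standard ATL notation, where the cooperation modality quantifies over strategies of a set of players:

```latex
% The system has a strategy to eventually reach a "goal" state,
% whatever the environment does:
\langle\!\langle \mathit{sys} \rangle\!\rangle \Diamond\, \mathit{goal}

% The branching-time path quantifiers arise as the two extremes
% of the cooperation modality:
%   \exists \Diamond \mathit{goal}
%       corresponds to  \langle\!\langle \mathit{sys},\mathit{env} \rangle\!\rangle \Diamond\, \mathit{goal}
%   \forall \Diamond \mathit{goal}
%       corresponds to  \langle\!\langle \emptyset \rangle\!\rangle \Diamond\, \mathit{goal}
```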

Depending on whether we admit arbitrary nesting of selective path quantifiers and temporal operators, we obtain the two alternating-time temporal logics ATL and ATL*. We interpret the formulas of ATL and ATL* over alternating transition systems. While in ordinary transition systems, each transition corresponds to a possible step of the system, in alternating transition systems, each transition corresponds to a possible move in the game between the system and the environment. Fair alternating transition systems can capture both synchronous and asynchronous compositions of open systems. For synchronous systems, the expressive power of ATL beyond CTL comes at no cost: the model-checking complexity of synchronous ATL is linear in the size of the system and the length of the formula. The symbolic model-checking algorithm for CTL extends with few modifications to synchronous ATL, and with some work, also to asynchronous ATL, whose model-checking complexity is quadratic. This makes ATL an obvious candidate for the automatic verification of open systems. In the case of ATL*, the model-checking problem is closely related to the synthesis problem for linear-time formulas, and requires doubly exponential time for both synchronous and asynchronous systems.


Timed Automata

Rajeev Alur

Model checking is emerging as a practical tool for automated debugging of complex reactive systems such as embedded controllers and network protocols (see [CK96] for a survey). In model checking, a high-level description of a system is compared against a logical correctness requirement to discover inconsistencies. Traditional techniques for model checking do not admit an explicit modeling of time and are thus unsuitable for the analysis of real-time systems, whose correctness depends on the relative magnitudes of different delays. Consequently, timed automata [AD94] were introduced as a formal notation to model the behavior of real-time systems. The definition provides a simple, and yet general, way to annotate state-transition graphs with timing constraints using finitely many real-valued clock variables. Automated analysis of timed automata relies on the construction of a finite quotient of the infinite space of clock valuations. Over the years, the formalism has been extensively studied, leading to many results establishing connections to circuits and logic, and much progress has been made in developing verification algorithms, heuristics, and tools. This paper provides a survey of the theory of timed automata and their role in the specification and verification of real-time systems.


Membership Questions for Timed and Hybrid Automata

Rajeev Alur, Robert P. Kurshan, and Mahesh Viswanathan

Timed and hybrid automata are extensions of finite-state machines for formal modeling of embedded systems with both discrete and continuous components. Reachability problems for these automata are well studied and have been implemented in verification tools. In this paper, for the purpose of effective error reporting and testing, we consider the membership problems for such automata. We consider different types of membership problems depending on whether the path (i.e. edge-sequence), or the trace (i.e. event-sequence), or the timed trace (i.e. timestamped event-sequence), is specified. We give comprehensive results regarding the complexity of these membership questions for different types of automata, such as timed automata and linear hybrid automata, with and without $\epsilon$-transitions.

First, we give an efficient O(n m^2) algorithm for generating timestamps corresponding to a path of length n in a timed automaton with m clocks. This algorithm is implemented in the verifier COSPAN to improve its diagnostic feedback during timing verification. Second, we show that for automata without $\epsilon$-transitions, the membership question is NP-complete for different types of automata, whether or not the timestamps are specified along with the trace. Third, we show that for automata with $\epsilon$-transitions, the membership question is as hard as the reachability question even for timed traces: it is PSPACE-complete for timed automata, and undecidable for slight generalizations.
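The timestamp-generation step can be pictured as constraint solving: once the path is fixed, the clock guards induce difference constraints over the event times, and a Bellman-Ford-style relaxation finds a satisfying assignment or detects infeasibility. The sketch below is this standard reduction, not COSPAN's specialized O(n m^2) algorithm, and the example constraints are hypothetical:

```python
def solve_difference_constraints(n, constraints):
    """Find t_0..t_{n-1} satisfying t_j - t_i <= c for every (i, j, c),
    by Bellman-Ford relaxation from an implicit virtual source.
    Returns None if the constraints are infeasible (negative cycle)."""
    t = [0.0] * n                      # virtual source gives every node 0
    for _ in range(n):
        changed = False
        for i, j, c in constraints:
            if t[i] + c < t[j]:        # relax edge i -> j of weight c
                t[j] = t[i] + c
                changed = True
        if not changed:
            return t                   # converged: all constraints hold
    return None                        # still relaxing after n rounds

# Hypothetical 3-event path: event 1 at least 2 time units after
# event 0 (t0 - t1 <= -2), and event 2 within 5 of event 0
# (t2 - t0 <= 5).
ts = solve_difference_constraints(3, [(1, 0, -2), (0, 2, 5)])
```

Any solution can then be shifted so the first timestamp is zero; the hardness results quoted above show that this easy case does not extend once $\epsilon$-transitions or unspecified traces enter the picture.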


Ordered Choice Diagrams for Symbolic Analysis

Rajeev Alur

We propose ordered choice diagrams (OCDs) for symbolic representation of boolean functions. An OCD is a nondeterministic variant of an ordered binary decision diagram (BDD) with appropriate reduction rules. The introduction of nondeterminism destroys canonicity, but affords significant succinctness. While OCDs have efficient algorithms for union, intersection, existential quantifier elimination, and emptiness testing, the equivalence problem for OCDs is coNP-complete. We show that symbolic model checking can still be performed efficiently by replacing the equivalence test with a stronger equality test. We report on a prototype implementation and preliminary results that indicate the advantage of OCD-based representation in the reachability analysis of distributed protocols.


Symbolic Exploration of Transition Hierarchies

Rajeev Alur, Thomas A. Henzinger, and Sriram K. Rajamani

In formal design verification, successful model checking is typically preceded by a laborious manual process of constructing design abstractions. We present a methodology for partially---and in some cases, fully---bypassing the abstraction process. For this purpose, we provide to the designer abstraction operators which, if used judiciously in the description of a design, structure the corresponding state space hierarchically. This structure can then be exploited by verification tools, and makes possible the automatic and exhaustive exploration of state spaces that would otherwise be out of scope for existing model checkers. Specifically, we present the following contributions:

  1. A temporal abstraction operator that aggregates transitions and hides intermediate steps. Mathematically, our abstraction operator is a function that maps a flat transition system into a two-level hierarchy where each atomic upper-level transition expands into an entire lower-level transition system. For example, an arithmetic operation may expand into a sequence of bit operations.
  2. A BDD-based algorithm for the symbolic exploration of multi-level hierarchies of transition systems. The algorithm traverses a level-n transition by expanding the corresponding level-(n-1) transition system on-the-fly. The level-n successors of a state are determined by computing a level-(n-1) reach set, which is then immediately released from memory. In this fashion, we can exhaustively explore hierarchically structured state spaces whose flat counterparts cause memory overflows.
  3. We experimentally demonstrate the efficiency of our method with three examples---a multiplier, a cache coherence protocol, and a multiprocessor system. In the first two examples, we obtain significant improvements in run times and peak BDD sizes over traditional state-space search. The third example cannot be model checked at all using conventional methods (without manual abstractions), but can be analyzed fully automatically using transition hierarchies.


Deciding Global Partial-Order Properties

Rajeev Alur, Ken McMillan, and Doron Peled

Model checking of asynchronous systems is traditionally based on the interleaving model, where an execution is modeled by a total order between events. Recently, the use of partial order semantics that allows independent events of concurrent processes to be unordered is becoming popular. Temporal logics that are interpreted over partial orders allow specifications relating global snapshots, and permit reduction algorithms to generate only one representative linearization of every possible partial-order execution during state-space search. This paper considers the satisfiability and the model checking problems for temporal logics interpreted over partially ordered sets of global configurations. For such logics, only undecidability results have been proved previously. In this paper, we present an Expspace decision procedure for a fragment that contains an eventuality operator and its dual. We also sharpen previous undecidability results, which used global predicates over configurations. We show that although our logic allows only local propositions (over events), it becomes undecidable when adding some natural until operator.


Model Checking of Hierarchical State Machines

Rajeev Alur and Mihalis Yannakakis

Model checking is emerging as a practical tool for detecting logical errors in early stages of system design. We investigate the model checking of hierarchical (nested) systems, i.e. finite state machines whose states themselves can be other machines. This nesting ability is common in various software design methodologies and is available in several commercial modeling tools. The straightforward way to analyze a hierarchical machine is to flatten it (thus, incurring an exponential blow up) and apply a model checking tool on the resulting ordinary FSM. We show that this flattening can be avoided. We develop algorithms for verifying linear time requirements whose complexity is polynomial in the size of the hierarchical machine. We address also the verification of branching time requirements and provide efficient algorithms and matching lower bounds.


Interaction between Path and Type Constraints

Peter Buneman, Wenfei Fan and Scott Weinstein

Path constraints have been proposed for semistructured data to generalize integrity constraints that are found in traditional database management systems. In particular, implication problems have been investigated for two forms of path constraints, namely, a class of word constraints [S. Abiteboul & V. Vianu, PODS'97] and a path constraint language Pc [P. Buneman, W. Fan & S. Weinstein, PODS'98]. Semistructured data is usually represented as an edge-labeled graph, unconstrained by any schema or type system. The question we address in this paper is what happens when a type system is imposed on semistructured data. One is tempted to think that adding structure simplifies reasoning about path constraints. Surprisingly, this is not the case. We show that there is a fragment of Pc whose associated implication and finite implication problems are decidable in PTIME, but are undecidable in the presence of type constraint. In addition, we also show that in other cases, imposing structure can have the expected effect of simplifying reasoning. In summary, we provide two dozen results on path constraint implication in a variety of database contexts. These results demonstrate that, in general, results developed for semistructured data may no longer hold when a type is imposed on the data, and vice versa.


A New Metric for Object Pose Estimation

Jeffrey Mendelsohn

Object pose estimation is a difficult task due to the non-linearities of the projection process, specifically the effect of depth. To overcome this complication, most algorithms use an error metric which removes the effect of depth. Recently, two new algorithms have been proposed based upon iteratively improving pose estimates obtained with weak-perspective or paraperspective approximations of the projection equations. A simple technique for improving the estimates of the two projection approximation algorithms is presented, and a new metric is proposed for use in `polishing' these object pose estimates. At all distances, the new algorithm reduces the estimated orientation error by over ten percent. At short distances, the orientation improvement is about seventeen percent and the position error is reduced by twelve percent.


A Family of Securable Protection Systems

J. S. Shapiro, S. Weber


Inference Rules for Nested Functional Dependencies

Carmem S. Hara, Susan B. Davidson


Signalling Protocol for P4 (SPP4), version 1.0

Ilija Hadzic
This document describes SPP4 v.1.0, a signalling protocol implemented on the Programmable Protocol Processing Pipeline (P4) platform. The protocol has been implemented on P4 v.1. It is expected that P4 v.2 will use a modified version of this protocol to support new features.

The current version of the signalling protocol allows P4 boards to synchronize their activities. The Protocol Boosters model is supported, and it is assumed that the mechanism modules are already available at each of the P4-enhanced nodes. Providing the mechanism modules by the end user is not yet supported.

The signalling protocol is specified to support non-transparent boosters (i.e. boosters that require deboosters). If only transparent boosters are used (i.e. those that can operate without debooster), some of the features may be unnecessary. The existing implementation can work with transparent boosters but it will have more overhead than necessary.

This document covers the protocol specification, the interaction between the signalling protocol and policy modules, and the software architecture of the current implementation. The purpose of this document is to explain, not to formally specify, SPP4.


ASL Recognition Based on a Coupling between HMMs and 3D Motion Analysis

Christian Vogler and Dimitris Metaxas
We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that context-dependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.


Symbolic Weak Bisimulation for Value-passing Calculi

Hee-Hwan Kwak, Jin-Young Choi, Insup Lee, Anna Philippou
Bisimulation equivalence has proved to be a useful notion for providing semantics to process calculi, and it has been an object of rigorous study within the concurrency theory community. Recently there have been significant efforts for extending bisimulation techniques to encompass process calculi that allow the communication of data, that is value-passing calculi. A main challenge in achieving this has been the fact that in value-passing processes, where the values are from an infinite domain, checking for process equivalence involves comparing an infinite number of behaviors. Central among work in this area is the approach in which a new notion of process semantics, namely {\em symbolic semantics}, is proposed as a means of expressing a class of value-passing processes as finite graphs, although their traditional underlying graphs are infinite. This paper presents new algorithms for computing symbolic weak bisimulation for value-passing processes. These algorithms are built on the basis of a new formulation of late weak bisimulation which we propose. Unlike the traditional definition, this formulation allows the use of a single symbolic weak transition relation for expressing both early and late bisimulations. In addition to being easy to implement, our algorithms highlight the distinction between early and late weak bisimulations. Furthermore, we propose a simple variation of symbolic transition graphs with assignment proposed by Lin, in which the order of assignments and actions in transitions is exchanged. We believe that the resulting graphs are easier to construct and more intuitive to understand.


A heterogeneous framework for query optimization

Alin Deutsch, Lucian Popa, and Val Tannen
We present an internal framework that generalizes and unifies concepts and techniques from both the relational and the object-oriented paradigms for the representation and optimization of queries. This addresses some of the optimization problems that arise in heterogeneous mediator systems with multiple layers of views. The framework relies on the synergy between a small number of basic ideas:
- a uniform treatment of collection restructuring and aggregation,
- the use of dictionary datatypes,
- the representation of integrity constraints as equations,
- the special and ubiquitous role of one equivalence law---"idemloop".
The effectiveness of the framework is illustrated through uniform treatments of standard relational algebra rewriting, tableau minimization, semijoin manipulation, transformation between pointer chasing and joins over extents, and the use of referential integrity constraints, key constraints, and inverse relationship constraints. The framework gives a new, logical-level treatment to some physical-level techniques such as the use of indexes, access support relations, and other precomputed structures. Putting it all together, we give new approaches to using constraints and precomputed information in optimizing and evaluating queries that span distributed heterogeneous data sources.


A PC Chase

Lucian Popa and Val Tannen
PC stands for path-conjunctive, the name of a class of queries and dependencies that we define over complex values with dictionaries. This class includes the relational conjunctive queries and embedded dependencies, as well as many interesting examples of complex value and oodb queries and integrity constraints. We show that some important classical results on containment, dependency implication, and chasing extend and generalize to this class.


PLAN System Security

Michael Hicks
PLAN, the Packet Language for Active Networks, describes a distributed programming environment which we have used to build an Active Network, PLANet. Security is obtained in PLANet from two sources: resource-limited active packets written in PLAN, and authenticated, router-resident service routines written in OCaml. Up until now, security in PLANet has been conceptual, rather than actual. This paper presents a security architecture for PLANet, and a series of additions and changes to implement it.


Power Management in Mobile Computing (A Survey)

Sanjay Udani and Jonathan Smith

Rapid advances in technology have resulted in laptop (mobile) computers with performance and features comparable to desktop (stationary) machines. Advances in rechargeable battery technology have failed to keep pace, decreasing the usefulness of mobile computers and portable wireless devices.

Several methods of power management can be used to prolong the battery life of a mobile computer. We provide a detailed analysis of power consumption typically encountered in a networked laptop computer and the power management methods currently used. We also outline some novel proposed power management methods.


Computer Animation 1998 Proceedings

Various authors' contributions are included in this technical report.
(CA'98 was organized by HMS, IEEE Computer Society, and The Computer Graphics Society.)
The 1998 Computer Animation Conference Proceedings include papers that span the areas of virtual agents, human and facial animation, physics-based animation, morphing and computer vision-based animation.


Mobile Code Security Techniques

Jonathan T. Moore
This paper presents a survey of existing techniques for achieving mobile code security, as well as a representative sampling of systems which use them. In particular, the problem domain is divided into two portions: protecting hosts from malicious code, and protecting mobile code from malicious hosts. The discussion of the malicious code problem includes a more in-depth study of the Java security model and touches upon several other systems. The malicious host problem, however, is much more difficult to solve, so our discussion is mostly restricted to ongoing research in that area.


An Execution Model for CPL+

Hartmut Liefke and Susan B. Davidson

Query languages and their optimizations have been a very important issue in the database community. Languages for updating databases, however, have not been studied to the same extent, although they are clearly important since databases must change over time. While update languages for the relational model are quite simple and provide little opportunity for optimizations, update languages for complex value databases turn out to be more challenging. CPL+ is a language for updating complex value databases, based on primitive values, records, variants, and collections. The syntax of CPL+ is concise in that only the parts of the database that change are specified. Furthermore, powerful optimization rules are available.

In this report, we show the correctness of the optimization rules. Furthermore, we describe an execution model for CPL+ together with the underlying abstract storage model for a complex value database. We develop a formal framework for describing the workspace of updates, i.e. the set of physical objects that are accessed or modified within an update. Based on this notion, we analyse optimizations and present a selection of proofs showing that the rewriting rules are indeed cost reducing.

MS-CIS-98-30 (IRCS-98-17)

Where To Look? Automating Some Visual Attending Behaviors of Human Characters (Ph.D. Dissertation Proposal)

Sonu Chopra
We propose a method for automatically generating the appropriate attentional (eye gaze or looking) behavior for virtual characters existing or performing tasks in a dynamically changing environment. Such behavior is expected of human-like characters but is usually tedious to animate and often not specified at all as part of the character's explicit actions. In our system, referred to as the AVA (Automated Visual Attending), users enter a list of motor or cognitive actions as input in text format: walk to the lamp post, monitor the traffic light, reach for the box, etc. The system generates the appropriate motions and automatically generates the corresponding attentional behavior. The resulting gaze behavior is produced not only by considering the explicit queue of required tasks, but also by factoring in involuntary visual functions known from human cognitive behavior (attentional capture by exogenous factors, spontaneous looking), the environment being viewed, task interactions, and task load. This method can be adapted to eye and head movement control for any facial model.


CoPa: a Parallel Programming Language for Collections

Dan Suciu, Val Tannen

In this paper we propose a new framework for parallel processing of collections. We define a high-level language called CoPa for processing nested sets, bags, and sequences (a generalization of arrays and lists). CoPa includes most features found in query languages for object-oriented or object-relational databases, and has, in addition, a powerful form of recursion not found in query languages. CoPa has a formal declarative definition of parallel complexity, as part of its specification. We prove the existence of a complexity-preserving compilation for CoPa, i.e. one which offers upper-bound guarantees for the parallel complexity of the compiled code. The majority of the compilation process is architecture-independent, using a parallel vector machine model (BVRAM). The BVRAM instructions form a sequence-algebra which is of independent interest, and have been carefully chosen to reconcile two conflicting demands: supporting the complexity-preserving compilation of CoPa's high-level constructs, and efficient implementability on a variety of architectures. The latter allows us to establish comparisons with some of the parallel algorithms work through a provably optimal implementation of the BVRAM on butterfly networks. In targeting more practical architectures we use the LogP model. Here we prove that monotone data communications admit optimal implementations on the LogP model, and use that to implement the BVRAM efficiently. Finally, we tested the feasibility of our entire approach to compilation by running some experiments on a LogP simulator. Their goal is to compare, for both speedup and scaleup, the three components of the total running time: the data communication cost, the control communication cost, and the local computations.
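As a toy illustration of processing nested collections (our own Python example; CoPa's actual syntax, semantics, and compilation are not shown here), a structural recursion rebuilds the nesting shape while applying an operation at the atomic elements:

```python
def deep_map(f, coll):
    # Structural recursion over nested sequences (lists stand in for
    # CoPa-style sequences): preserve the nesting, apply f at the leaves.
    if isinstance(coll, list):
        return [deep_map(f, x) for x in coll]
    return f(coll)

nested = [1, [2, 3], [[4]]]
result = deep_map(lambda x: x * 2, nested)   # → [2, [4, 6], [[8]]]
```

Each level of such a recursion is independently data-parallel, which is the intuition behind giving the language a declarative parallel-complexity specification.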


Equality, Type and Word Constraints

Peter Buneman, Wenfei Fan, Scott Weinstein

As a generalization of inclusion dependencies found in relational databases, word constraints have been studied for semistructured data as well as for an object-oriented model. In both contexts, the equality relation is simply treated as first-order logic equality, and the decidability of the implication and finite implication problems for word constraints has been established. A question left open is whether these problems remain decidable in the context of an object-oriented model M that supports complex values with nested structures and complex-value equality. This paper provides an answer to that question. We characterize a schema in M in terms of a type constraint and an equality constraint, and investigate the interaction between these constraints and word constraints. We show that in the presence of the equality and type constraints, the implication and finite implication problems for word constraints are also decidable, by giving a small-model argument.


Path Constraints on Deterministic Graphs

Peter Buneman, Wenfei Fan, Scott Weinstein

We study path constraints for the deterministic graph model, a variation of the semistructured data model in which data is represented as a rooted, edge-labeled, directed graph with deterministic edge relations. The path constraint languages considered include the class of word constraints, the language Pc investigated in [PODS98], and an extension of Pc that allows regular expressions in place of paths. We establish complexity results on the implication and finite implication problems for these constraint languages.


Chase and Axioms for PC Queries and Dependencies

Lucian Popa and Val Tannen

This is a companion technical report for the authors' paper "An equational chase for path-conjunctive queries, constraints, and views" that has appeared in the Proceedings of ICDT'99.


Generating Effective Instructions: Knowing When to Stop

Juliet C. Bourne

One aspect of natural language generation is describing entities so that they are distinguished from all other entities. Entities include objects, events, actions, and states. Much attention has been paid to objects and the generation of their referring expressions (descriptions meant to pick out or refer to an entity). However, a growing area of research is the automated generation of instruction manuals, and an important part of generating instructions is distinguishing the actions that are to be carried out from other possible actions. One distinguishing feature is an action's termination, or when the performance of the action is to stop. My dissertation work focuses on generating action descriptions from action information using the SPUD generation algorithm developed here at Penn by Matthew Stone. In my work, I concentrate on the generation of expressions of termination information as part of action descriptions. The problems I address include how termination information is represented in action information and expressed in natural language, how to determine when an action description allows the reader to understand how to perform the action correctly, and how to generate the appropriate description of action information.


Generation, Estimation and Tracking of Faces

Douglas M. DeCarlo

This thesis describes techniques for the construction of face models for both computer graphics and computer vision applications. It also details model-based computer vision methods for extracting data from images and combining it with the model. Our face models respect the measurements of populations described by face anthropometry studies. In computer graphics, the anthropometric measurements permit the automatic generation of varied geometric models of human faces. This is accomplished by producing a random set of face measurements generated according to anthropometric statistics. A face fitting these measurements is realized using variational modeling. In computer vision, anthropometric data biases face shape estimation towards more plausible individuals. Having such a detailed model encourages the use of model-based techniques---we use a physics-based deformable model framework. We derive and solve a dynamic system which accounts for edges in the image and incorporates optical flow as a motion constraint on the model. Our solution ensures this constraint remains satisfied when edge information is used, which helps prevent tracking drift. This method is extended using the residuals from the optical flow solution: the extracted structure of the model can be improved by determining small changes in the model that reduce this error residual. We present experiments in extracting the shape and motion of a face from image sequences which demonstrate the generality of our technique as well as provide validation.


A Framework for Run-time Correctness Assurance of Real-Time Systems

Moonjoo Kim, Mahesh Viswanathan, Hanene Ben-Abdallah, Sampath Kannan, Insup Lee, and Oleg Sokolsky

We describe the Monitoring and Checking (MaC) framework, which provides assurance of the correctness of program execution at run-time. Our approach complements the two traditional approaches for ensuring that a system is correct, namely static analysis and testing. In contrast to these approaches, which try to ensure that all possible executions of the system are correct, our approach only ensures that the current execution of the system is correct. The MaC architecture consists of three components: a filter, an event recognizer, and a run-time checker. The filter extracts low-level information, such as values of program variables and function calls, from the system code, and sends it to the event recognizer. From this low-level information, the event recognizer detects the occurrence of "abstract" requirements events, and informs the run-time checker about them. The run-time checker uses these events to check that the current system execution conforms with the formal requirement specification for the system. This paper gives an overview of our current prototype implementation, which uses Java as the implementation language, together with languages for expressing monitoring scripts and requirements specifications.
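The three-stage MaC pipeline described above can be sketched as a chain of generators. This is an illustrative toy, not the prototype itself (which uses Java and dedicated scripting languages); the event names, predicate, and trace data are assumptions.

```python
# Toy sketch of the MaC pipeline: filter -> event recognizer -> checker.
# All names and the example property are illustrative assumptions.

def filter_stage(trace):
    """Extract low-level observations (variable updates) from an execution."""
    for step in trace:
        yield step  # e.g. {"var": "speed", "value": 70}

def event_recognizer(observations):
    """Map low-level observations to abstract requirements events."""
    for obs in observations:
        if obs["var"] == "speed" and obs["value"] > 65:
            yield ("speed_limit_exceeded", obs["value"])

def runtime_checker(events):
    """Check the current execution against the requirement specification."""
    # Here the (toy) specification simply forbids speed_limit_exceeded events.
    return [e for e in events if e[0] == "speed_limit_exceeded"]

trace = [{"var": "speed", "value": 60}, {"var": "speed", "value": 70}]
violations = runtime_checker(event_recognizer(filter_stage(trace)))
print(violations)  # [('speed_limit_exceeded', 70)]
```

Note how only the current execution trace is examined: the checker reports violations of this run, rather than proving the property for all runs.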


Image-Based Ventricular Blood Flow Analysis

Timothy N. Jones

We describe a novel and non-invasive method for the quantitative analysis of blood flow in the left ventricle of specific human patients using information derived from medical images. There are three major components to the method. First, a new approach to the segmentation problem is presented which locates the ventricle (or other organs of interest) in medical images. The method is independent of the imaging modality used (for example, MR, CT, and ultrasound) and is automatic, requiring as initialization only a single point within the interior of the ventricle. Existing segmentation techniques either require much more information during initialization, such as an approximation to the boundary of an object, or are not robust to the types of noisy data encountered in the medical domain. By integrating region-based and physics-based modeling techniques we have devised a hybrid design that overcomes these limitations. In our experiments we demonstrate, across imaging modalities, that this integration automates and significantly improves the boundary detection results. Next, a technique known as MRI-SPAMM is applied to extract the full 3D motion of the ventricular walls. This technique applies a magnetic grid within the heart tissue which deforms along with the tissue through the cardiac cycle and appears in MR images. By tracking the deformation in the images using a physics-based model, the wall motion can be quantified. Finally, the ventricular wall motion information is used by an efficient computational fluid dynamics solver to simulate the flow of blood through the ventricle. Boundary conditions for the solver are derived directly from the wall motion information, which allows, for the first time, a patient-specific left-ventricular blood flow simulation. We present experiments using data from both normal and diseased subjects and compare our results with other techniques for estimating ventricular blood flow.
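The single-seed-point initialization described for the segmentation component can be illustrated with a minimal region-growing sketch: starting from one interior pixel, the region expands over similar neighbors. This shows only the region-based half of the hybrid; the thesis combines it with physics-based deformable models, and the image, threshold, and function names here are illustrative assumptions.

```python
# Minimal region growing from a single interior seed point (illustrative).
# The hybrid method in the thesis couples this region-based idea with a
# physics-based deformable model; this sketch shows the region side only.
from collections import deque

def region_grow(image, seed, tol):
    """Grow a 4-connected region of pixels within `tol` of the seed value."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - base) <= tol:
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Toy "image": a dark 2x2 chamber in the upper-left corner, bright elsewhere.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [9, 9, 9, 9],
]
print(sorted(region_grow(image, seed=(0, 0), tol=1)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Real medical images are far noisier than this toy, which is precisely why the thesis augments region growing with a deformable-model stage rather than relying on intensity thresholds alone.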
