The Dynamic Host Configuration Protocol (DHCP) allows hosts to dynamically obtain IP addresses and other configuration parameters. The protocol greatly eases the administration of an IP subnetwork and is thus widely used.
The basic approach of DHCP is for a client to broadcast a request for an address, and for one or more servers to respond with addresses. This broadcast exchange creates significant security risks in the presence of active attackers.
We have designed a new, efficient implementation method for the fail-stop cryptographic protocols originated by Gong and Syverson. The implementation method uses cryptographic hashes of the state of the sender and receiver and the exchanged messages to detect if any deviation from expected behavior has taken place. If it has, an attack is assumed and the protocol ceases execution. We present a proof outline of protocol security using our method.
We have applied our method to DHCP, resulting in DHCP++. DHCP++ uses our fail-stop implementation technique to prevent any attacks that could otherwise violate DHCP's security assumptions. The paper analyzes the threats eliminated by this enhancement, and measurements against DHCP show that the incremental performance costs are minimal.
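The fail-stop idea described above can be illustrated with a minimal sketch (the message contents and class names below are hypothetical, not DHCP++'s actual formats): each party folds every message of the exchange into a running cryptographic hash of its view; if the two digests ever diverge, an attack is assumed and execution stops.

```python
import hashlib

class Party:
    """One endpoint's running hash over its view of the exchange."""
    def __init__(self):
        self._h = hashlib.sha256()

    def absorb(self, message: bytes):
        # Fold every sent or received message into the running hash.
        self._h.update(message)

    def digest(self) -> str:
        return self._h.hexdigest()

def fail_stop_check(client: Party, server: Party) -> bool:
    # True: the two views agree, the protocol may continue.
    # False: a deviation was detected, so execution must cease.
    return client.digest() == server.digest()
```

Any tampering with a message in transit makes the client's and server's digests differ, so the deviation is caught at the next comparison.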
Security is an obvious risk to active networking, as increased flexibility creates numerous opportunities for mischief. The point at which this flexibility is exposed, e.g., through the loading of code into network elements, must therefore be carefully crafted to ensure security.
The Secure Active Network Environment (SANE) architecture provides a secure bootstrap process resulting in a module loader/packet execution environment. As a set of nodes bootstrap, they exchange certificates to permit secure module exchange.
This paper demonstrates that SANE, while exhibiting performance degradation relative to unsecured operation, is able to perform acceptably. We include measurements comparing the loading of an active ping on a secure versus an insecure infrastructure.
Firewalls have been in use in the Internet for a number of years. They are deployed in companies, university campuses, Internet Service Providers and a variety of other network environments. The main reasons for their proliferation have been ease of administration and their role as single point of policy enforcement.
Active Networks drastically change the network infrastructure and communication primitives. Because of their increased flexibility, they also introduce new security risks. Furthermore, a traditional transparent firewall in such an environment cannot effectively enforce the policies set by the administrator, since the active nature of packets makes them difficult to analyze.
In this paper we propose a shift in the role of a firewall in such an environment. Instead of acting as policy enforcers, firewalls will now have to act as policy specifiers. The switches and hosts in the interior of the protected network will have to enforce these policies. We examine ways of deciding the access policies and of subsequently enforcing them. We describe an architecture based on the PolicyMaker system and the SwitchWare active network architecture.
Ongoing research in adaptive protocols and active networks has presumed that flexibility is offered exclusively through software systems, and the performance implications have generated considerable skepticism. The Programmable Protocol Processing Pipeline (P4) exploits the dynamic reconfigurability of RAM based Field Programmable Gate Arrays (FPGAs) to provide both hardware performance and dynamic functionality to network components.
We use forward error correction (FEC) as an example of a protocol processing function. Our measurements show that the P4 is able to process the data stream at OC-3 (155 Mbps) link rate, and consequently improve TCP performance in noisy environments.
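The recovery principle behind FEC can be illustrated in software with a simple XOR-parity scheme (the P4 performs this processing in FPGA hardware at line rate; the block sizes and function names here are illustrative only):

```python
def encode(blocks):
    """Append one XOR parity block to a group of equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return blocks + [bytes(parity)]

def recover(group, lost_index):
    """Rebuild the single missing block by XOR-ing all survivors.

    `group` is the encoded group with the lost block set to None."""
    length = len(next(b for b in group if b is not None))
    out = bytearray(length)
    for j, b in enumerate(group):
        if j == lost_index:
            continue
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)
```

Because XOR is its own inverse, any single lost block in a group is exactly the XOR of the remaining blocks plus the parity block, so one erasure per group is repaired without retransmission.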
The design objectives and the mechanisms for achieving those objectives are considered for each of three systems, Java, Erlang, and TIL. In particular, I examine the use of types and intermediate representations in the system implementation. In addition, the systems are compared to examine how one system's mechanisms may (or may not) be applied to another.
Query languages and their optimizations have been a very important issue in the database community. Languages for updating databases, however, have not been studied to the same extent, although they are clearly important since databases must change over time. The structure and expressiveness of updates is largely dependent on the data model. In relational databases, for example, the update language typically allows the user to specify changes to individual fields of a subset of a relation that meets some selection criterion. The syntax is terse, specifying only the pieces of the database that are to be altered. Because of its simplicity, most of the optimizations take place in the internal processing of the update rather than at the language level. In complex value databases, the need for a terse and optimizable update language is much greater, due to the deeply nested structures involved.
Starting with a query language for complex value databases called the Collection Programming Language (CPL), we describe an extension called CPL+ which provides a convenient and intuitive specification of updates on complex values. CPL is a functional language, with powerful optimizations achieved through rewrite rules. Additional rewrite rules are derived for CPL+ and a notion of ``deltafication'' is introduced to transform complete updates, expressed as conventional CPL expressions, into equivalent update expressions in CPL+. As a result of applying these transformations, the performance of complex updates can increase substantially.
Temporal logic comes in two varieties: linear-time temporal logic assumes implicit universal quantification over all paths that are generated by system moves; branching-time temporal logic allows explicit existential and universal quantification over all paths. We introduce a third, more general variety of temporal logic: alternating-time temporal logic offers selective quantification over those paths that are possible outcomes of games, such as the game in which the system and the environment alternate moves. While linear-time and branching-time logics are natural specification languages for closed systems, alternating-time logics are natural specification languages for open systems. For example, by preceding the temporal operator ``eventually'' with a selective path quantifier, we can specify that in the game between the system and the environment, the system has a strategy to reach a certain state. Also the problems of receptiveness, realizability, and controllability can be formulated as model-checking problems for alternating-time formulas.
Depending on whether we admit arbitrary nesting of selective path quantifiers and temporal operators, we obtain the two alternating-time temporal logics ATL and ATL*. We interpret the formulas of ATL and ATL* over alternating transition systems. While in ordinary transition systems, each transition corresponds to a possible step of the system, in alternating transition systems, each transition corresponds to a possible move in the game between the system and the environment. Fair alternating transition systems can capture both synchronous and asynchronous compositions of open systems. For synchronous systems, the expressive power of ATL beyond CTL comes at no cost: the model-checking complexity of synchronous ATL is linear in the size of the system and the length of the formula. The symbolic model-checking algorithm for CTL extends with few modifications to synchronous ATL, and with some work, also to asynchronous ATL, whose model-checking complexity is quadratic. This makes ATL an obvious candidate for the automatic verification of open systems. In the case of ATL*, the model-checking problem is closely related to the synthesis problem for linear-time formulas, and requires doubly exponential time for both synchronous and asynchronous systems.
Model checking is emerging as a practical tool for automated debugging of complex reactive systems such as embedded controllers and network protocols (see [CK96] for a survey). In model checking, a high-level description of a system is compared against a logical correctness requirement to discover inconsistencies. Traditional techniques for model checking do not admit an explicit modeling of time, and are thus unsuitable for the analysis of real-time systems whose correctness depends on the relative magnitudes of different delays. Consequently, timed automata [AD94] were introduced as a formal notation to model the behavior of real-time systems. Their definition provides a simple, and yet general, way to annotate state-transition graphs with timing constraints using finitely many real-valued clock variables. Automated analysis of timed automata relies on the construction of a finite quotient of the infinite space of clock valuations. Over the years, the formalism has been extensively studied, leading to many results establishing connections to circuits and logic, and much progress has been made in developing verification algorithms, heuristics, and tools. This paper provides a survey of the theory of timed automata, and their role in the specification and verification of real-time systems.
Timed and hybrid automata are extensions of finite-state machines for formal modeling of embedded systems with both discrete and continuous components. Reachability problems for these automata are well studied and have been implemented in verification tools. In this paper, for the purpose of effective error reporting and testing, we consider the membership problems for such automata. We consider different types of membership problems depending on whether the path (i.e. edge-sequence), or the trace (i.e. event-sequence), or the timed trace (i.e. timestamped event-sequence), is specified. We give comprehensive results regarding the complexity of these membership questions for different types of automata, such as timed automata and linear hybrid automata, with and without $\epsilon$-transitions.
First, we give an efficient O(n m^2) algorithm for generating timestamps corresponding to a path of length n in a timed automaton with m clocks. This algorithm is implemented in the verifier COSPAN to improve its diagnostic feedback during timing verification. Second, we show that for automata without $\epsilon$-transitions, the membership question is NP-complete for different types of automata whether or not the timestamps are specified along with the trace. Third, we show that for automata with $\epsilon$-transitions, the membership question is as hard as the reachability question even for timed traces: it is PSPACE-complete for timed automata, and undecidable for slight generalizations.
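A much-simplified flavor of the membership question can be sketched in code: checking (rather than generating) timestamps along a path. The guard encoding below is a hypothetical simplification with closed interval bounds, not the paper's full constraint language:

```python
def check_timed_trace(edges, times, clocks):
    """Check whether timestamps `times` satisfy the clock guards along
    a path `edges`.  Each edge is (guard, resets): `guard` maps a clock
    name to a (lo, hi) bound on its value when the edge is taken, and
    `resets` is the set of clocks reset by the edge."""
    last_reset = {c: 0.0 for c in clocks}   # all clocks start at 0
    prev = 0.0
    for (guard, resets), t in zip(edges, times):
        if t < prev:                        # time must be monotone
            return False
        for c, (lo, hi) in guard.items():
            value = t - last_reset[c]       # clock value = elapsed since reset
            if not (lo <= value <= hi):
                return False
        for c in resets:
            last_reset[c] = t
        prev = t
    return True
```

The real difficulty, reflected in the complexity results above, lies in finding satisfying timestamps (or deciding none exist), especially once $\epsilon$-transitions allow unobserved intermediate moves.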
We propose ordered choice diagrams (OCDs) for symbolic representation of boolean functions. An OCD is a nondeterministic variant of an ordered binary decision diagram (BDD) with appropriate reduction rules. The introduction of nondeterminism destroys canonicity, but affords significant succinctness. While OCDs have efficient algorithms for union, intersection, existential quantifier elimination, and emptiness test, the equivalence problem for OCDs is coNP-complete. We show that symbolic model checking can still be performed efficiently by replacing the equivalence test with a stronger equality test. We report on a prototype implementation and preliminary results that indicate the advantage of OCD-based representation in reachability analysis of distributed protocols.
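The key trade-off, constant-time union via nondeterministic choice nodes versus a still-easy emptiness test, can be sketched as follows. The node shapes and the absence of reduction rules here are heavily simplified assumptions, not the paper's definition:

```python
# A node is ('leaf', bool), ('var', name, low, high), or ('or', children)
# -- the 'or' node is the nondeterministic choice that a plain BDD lacks.
TRUE, FALSE = ('leaf', True), ('leaf', False)

def var(name, low=FALSE, high=TRUE):
    return ('var', name, low, high)

def union(f, g):
    # Union is constant time: just add one nondeterministic choice node.
    # (A canonical BDD would need a full recursive apply here.)
    return ('or', [f, g])

def is_empty(node):
    """Emptiness test: the function is empty iff no branch of any
    nondeterministic choice can reach the True terminal."""
    kind = node[0]
    if kind == 'leaf':
        return not node[1]
    if kind == 'or':
        return all(is_empty(child) for child in node[1])
    _, _, low, high = node
    return is_empty(low) and is_empty(high)
```

The price of the cheap union is the loss of canonicity: two OCDs denoting the same function need not be structurally equal, which is why equivalence becomes coNP-complete while emptiness stays easy.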
In formal design verification, successful model checking is typically preceded by a laborious manual process of constructing design abstractions. We present a methodology for partially---and in some cases, fully---bypassing the abstraction process. For this purpose, we provide to the designer abstraction operators which, if used judiciously in the description of a design, structure the corresponding state space hierarchically. This structure can then be exploited by verification tools, and makes possible the automatic and exhaustive exploration of state spaces that would otherwise be out of scope for existing model checkers. Specifically, we present the following contributions:
Model checking of asynchronous systems is traditionally based on the interleaving model, where an execution is modeled by a total order between events. Recently, the use of partial order semantics that allows independent events of concurrent processes to be unordered is becoming popular. Temporal logics that are interpreted over partial orders allow specifications relating global snapshots, and permit reduction algorithms to generate only one representative linearization of every possible partial-order execution during state-space search. This paper considers the satisfiability and the model checking problems for temporal logics interpreted over partially ordered sets of global configurations. For such logics, only undecidability results have been proved previously. In this paper, we present an Expspace decision procedure for a fragment that contains an eventuality operator and its dual. We also sharpen previous undecidability results, which used global predicates over configurations. We show that although our logic allows only local propositions (over events), it becomes undecidable when adding some natural until operator.
Model checking is emerging as a practical tool for detecting logical errors in early stages of system design. We investigate the model checking of hierarchical (nested) systems, i.e. finite state machines whose states themselves can be other machines. This nesting ability is common in various software design methodologies and is available in several commercial modeling tools. The straightforward way to analyze a hierarchical machine is to flatten it (thus, incurring an exponential blow up) and apply a model checking tool on the resulting ordinary FSM. We show that this flattening can be avoided. We develop algorithms for verifying linear time requirements whose complexity is polynomial in the size of the hierarchical machine. We address also the verification of branching time requirements and provide efficient algorithms and matching lower bounds.
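The idea of analyzing each component machine once instead of flattening can be sketched for plain reachability. The single-entry/single-exit encoding below is a hypothetical simplification, far more restrictive than the hierarchical machines treated in the paper:

```python
def reachable_exit(machines, name, memo=None):
    """Can machine `name`'s exit be reached from its entry?

    Each machine is analyzed at most once per top-level query (memoized),
    so the cost is polynomial in the hierarchical description instead of
    exponential in the flattened machine."""
    if memo is None:
        memo = {}
    if name in memo:
        return memo[name]
    memo[name] = False                    # pessimistic default breaks cycles
    m = machines[name]
    seen, stack = set(), [m['entry']]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if s == m['exit']:
            memo[name] = True
            break
        boxes = m.get('boxes', {})        # states that stand for sub-machines
        if s in boxes and not reachable_exit(machines, boxes[s], memo):
            continue                      # sub-machine cannot be traversed
        stack.extend(m['edges'].get(s, []))
    return memo[name]
```

The memo table plays the role of a reusable summary: even if the same sub-machine is instantiated in many boxes, its traversability is computed once, which is the essence of avoiding the flattening blow-up.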
Path constraints have been proposed for semistructured data to generalize integrity constraints that are found in traditional database management systems. In particular, implication problems have been investigated for two forms of path constraints, namely, a class of word constraints [S. Abiteboul & V. Vianu, PODS'97] and a path constraint language Pc [P. Buneman, W. Fan & S. Weinstein, PODS'98]. Semistructured data is usually represented as an edge-labeled graph, unconstrained by any schema or type system. The question we address in this paper is what happens when a type system is imposed on semistructured data. One is tempted to think that adding structure simplifies reasoning about path constraints. Surprisingly, this is not the case. We show that there is a fragment of Pc whose associated implication and finite implication problems are decidable in PTIME, but are undecidable in the presence of a type constraint. We also show that in other cases, imposing structure can have the expected effect of simplifying reasoning. In summary, we provide two dozen results on path constraint implication in a variety of database contexts. These results demonstrate that, in general, results developed for semistructured data may no longer hold when a type is imposed on the data, and vice versa.
Object pose estimation is a difficult task due to the non-linearities of the projection process; specifically with regard to the effect of depth. To overcome this complication, most algorithms use an error metric which removes the effect of depth. Recently, two new algorithms have been proposed based upon iteratively improving pose estimates obtained with weak-perspective or paraperspective approximations of the projection equations. A simple technique for improving the estimates of the two projection approximation algorithms is presented and a new metric is proposed for use in `polishing' these object pose estimates. At all distances, the new algorithm reduces the estimated orientation error by over ten percent. At short distances, the orientation improvement is about seventeen percent and the position error is reduced by twelve percent.
The current version of the signalling protocol allows P4 boards to synchronize their activities. The Protocol Boosters model is supported, and it is assumed that the mechanism modules are already available at each of the P4-enhanced nodes. Providing the mechanism modules by the end user is not yet supported.
The signalling protocol is specified to support non-transparent boosters (i.e. boosters that require deboosters). If only transparent boosters are used (i.e. those that can operate without a debooster), some of the features may be unnecessary. The existing implementation can work with transparent boosters, but it will incur more overhead than necessary.
This document covers the protocol specification, the interaction between the signalling protocol and policy modules, and the software architecture of the current implementation. The purpose of this document is to explain, not to formally specify, SPP4.
Rapid advances in technology have resulted in laptop (mobile) computers with performance and features comparable to desktop (stationary) machines. Advances in rechargeable battery technology have failed to keep pace, decreasing the usefulness of mobile computers and portable wireless devices.
Several methods of power management can be used to prolong the battery life of a mobile computer. We provide a detailed analysis of the power consumption typically encountered in a networked laptop computer and the power management methods currently used. We also outline some newly proposed power management methods.
Query languages and their optimizations have been a very important issue in the database community. Languages for updating databases, however, have not been studied to the same extent, although they are clearly important since databases must change over time. While update languages for the relational model are quite simple and provide little opportunity for optimizations, update languages for complex value databases turn out to be more challenging. CPL+ is a language for updating complex value databases, based on primitive values, records, variants, and collections. The syntax of CPL+ is concise in that only the parts of the database that change are specified. Furthermore, powerful optimization rules are available.
In this report, we show the correctness of the optimization rules. Furthermore, we describe an execution model for CPL+ together with the underlying abstract storage model for a complex value database. We develop a formal framework for describing the workspace of updates - i.e. the set of physical objects that are accessed or updated within an update. Based on this notion, we analyse optimizations and present a selection of proofs that show that the rewriting rules are indeed cost reducing.
In this paper we propose a new framework for parallel processing of collections. We define a high-level language called CoPa for processing nested sets, bags, and sequences (a generalization of arrays and lists). CoPa includes most features found in query languages for object-oriented or object-relational databases, and has, in addition, a powerful form of recursion not found in query languages. CoPa has a formal declarative definition of parallel complexity, as part of its specification. We prove the existence of a complexity-preserving compilation for CoPa, i.e. one which offers upper-bound guarantees for the parallel complexity of the compiled code. The majority of the compilation process is architecture-independent, using a parallel vector machine model (BVRAM). The BVRAM instructions form a sequence-algebra which is of independent interest, and have been carefully chosen to reconcile two conflicting demands: supporting the complexity-preserving compilation of CoPa's high-level constructs, and efficient implementability on a variety of architectures. The latter allows us to establish comparisons with some of the parallel algorithms work through a provably optimal implementation of the BVRAM on butterfly networks. In targeting more practical architectures we use the LogP model. Here we prove that monotone data communications admit optimal implementations on the LogP model, and use that to implement the BVRAM efficiently. Finally, we tested the feasibility of our entire approach to compilation by running some experiments on a LogP simulator. Their goal is to compare, for both speedup and scaleup, the three components of the total running time: the data communication cost, the control communication cost, and the local computations.
As a generalization of inclusion dependencies that are found in relational databases, word constraints have been studied for semistructured data as well as for an object-oriented model. In both contexts, the equality relation is simply treated as first-order logic equality, and the decidability of the implication and finite implication problems for word constraints has been established. A question left open is whether these problems are still decidable in the context of an object-oriented model M which supports complex values with nested structures and complex value equality. This paper provides an answer to that question. We characterize a schema in M in terms of a type constraint and an equality constraint, and investigate the interaction between these constraints and word constraints. We show that in the presence of the equality and type constraints, the implication and finite implication problems for word constraints remain decidable, by giving a small model argument.
We study path constraints for the deterministic graph model, a variation of the semistructured data model in which data is represented as a rooted, edge-labeled directed graph with deterministic edge relations. The path constraint languages considered include the class of word constraints, the language Pc investigated in [PODS98], and an extension of Pc obtained by allowing regular expressions in place of paths. Complexity results on the implication and finite implication problems for these constraint languages are established.
This is a companion technical report for the authors' paper "An equational chase for path-conjunctive queries, constraints, and views" that has appeared in the Proceedings of ICDT'99.
One aspect of Natural Language generation is describing entities so that they are distinguished from all other entities. Entities include objects, events, actions, and states. Much attention has been paid to objects and the generation of their referring expressions (descriptions meant to pick out or refer to an entity). However, a growing area of research is the automated generation of instruction manuals and an important part of generating instructions is distinguishing the actions that are to be carried out from other possible actions. One distinguishing feature is an action's termination, or when the performance of the action is to stop. My dissertation work focuses on generating action descriptions from action information using the SPUD generation algorithm developed here at Penn by Matthew Stone. In my work, I concentrate on the generation of expressions of termination information as part of action descriptions. The problems I address include how termination information is represented in action information and expressed in Natural Language, how to determine when an action description allows the reader to understand how to perform the action correctly, and how to generate the appropriate description of action information.
This thesis describes techniques for the construction of face models for both computer graphics and computer vision applications. It also details model-based computer vision methods for extracting and combining data with the model. Our face models respect the measurements of populations described by face anthropometry studies. In computer graphics, the anthropometric measurements permit the automatic generation of varied geometric models of human faces. This is accomplished by producing a random set of face measurements generated according to anthropometric statistics. A face fitting these measurements is realized using variational modeling. In computer vision, anthropometric data biases face shape estimation towards more plausible individuals. Having such a detailed model encourages the use of model-based techniques---we use a physics-based deformable model framework. We derive and solve a dynamic system which accounts for edges in the image and incorporates optical flow as a motion constraint on the model. Our solution ensures this constraint remains satisfied when edge information is used, which helps prevent tracking drift. This method is extended using the residuals from the optical flow solution. The extracted structure of the model can be improved by determining small changes in the model that reduce this error residual. We present experiments in extracting the shape and motion of a face from image sequences which exhibit the generality of our technique, as well as provide validation.
We describe the Monitoring and Checking (MaC) framework which provides assurance on the correctness of program execution at run-time. Our approach complements the two traditional approaches for ensuring that a system is correct, namely static analysis and testing. In contrast to these approaches, which try to ensure that all possible executions of the system are correct, our approach only ensures that the current execution of the system is correct. The MaC architecture consists of three components: a filter, an event recognizer, and a run-time checker. The filter extracts low-level information, such as values of program variables and function calls, from the system code, and sends it to the event recognizer. From this low-level information, the event recognizer detects the occurrence of "abstract" requirements events, and informs the run-time checker about them. The run-time checker uses these events to check that the current system execution conforms with the formal requirement specification for the system. This paper overviews our current prototype implementation, which uses Java as the implementation language, and languages to express monitoring scripts and requirements specifications.
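The three-stage pipeline can be sketched as follows. The event names, threshold, and requirement below are hypothetical stand-ins; the actual MaC prototype uses Java and dedicated specification languages for monitoring scripts and requirements:

```python
def event_recognizer(updates, threshold):
    """Map low-level (variable, value) updates from the filter to
    abstract events, e.g. 'overheat' when a sensor exceeds a bound."""
    for var_name, value in updates:
        if var_name == 'temp' and value > threshold:
            yield ('overheat', value)
        elif var_name == 'temp':
            yield ('ok', value)

def run_time_checker(events):
    """Check the (hypothetical) requirement: the current execution must
    never produce two consecutive 'overheat' events."""
    prev = None
    for name, _value in events:
        if name == 'overheat' and prev == 'overheat':
            return False        # the current execution violates the spec
        prev = name
    return True
```

Note that, as in MaC, only the execution actually observed is judged: a verdict of True says nothing about other possible executions of the monitored system.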
We describe a novel and non-invasive method for the quantitative analysis of blood flow in the left ventricle of specific human patients using information derived from medical images. There are three major components to the method. First, a new approach to the segmentation problem is presented which locates the ventricle (or other organs of interest) in medical images. The method is independent of the imaging modality used (for example, MR, CT, and Ultrasound) and is automatic, requiring as initialization a single point within the interior of the ventricle. Existing segmentation techniques either require much more information during initialization, such as an approximation to the boundary of an object, or are not robust to the types of noisy data encountered in the medical domain. By integrating region-based and physics-based modeling techniques we have devised a hybrid design that overcomes these limitations. In our experiments we demonstrate, across imaging modalities, that this integration automates and significantly improves the boundary detection results. Next, a technique known as MRI-SPAMM is applied to extract the full 3D motion of the ventricular walls. This technique applies a magnetic grid within the heart tissue which deforms along with the tissue through the cardiac cycle and appears in MR images. By tracking the deformation in the images using a physics-based model, the wall motion can be quantified. Finally, the ventricular wall motion information is used by an efficient computational fluid dynamics solver to simulate the flow of blood through the ventricle. Boundary conditions for the solver are directly derived from the wall motion information, which allows for the first time a patient-specific LV blood flow simulation. We present experiments using data from both normal and diseased subjects and compare our results with other techniques for estimating ventricular blood flow.