Technical Report Abstracts


1996


MS-CIS-96-01

Bernstein Polynomials for Radiative Transfer Computations

Min-Zhi Shao, Norman Badler

In this paper we propose using planar and spherical Bernstein polynomials over the angular domain for radiative transfer computations. In the planar domain, we propose using piecewise Bernstein basis functions and symmetric Gaussian quadrature formulas over triangular elements for a high quality radiosity solution. In the spherical domain, we propose using piecewise Bernstein basis functions over a geodesic triangulation to represent the radiance function. The representation is intrinsic to the unit sphere, and may be efficiently stored, evaluated, and subdivided by the de Casteljau algorithm. The computation of other fundamental radiometric quantities such as vector irradiance and reflected radiance may be reduced to the integration of the piecewise Bernstein basis functions on the unit sphere. The key result of our work is a simple geometric integration algorithm based on adaptive domain subdivision for the Bernstein-Bezier polynomials over a geodesic triangle on the unit sphere.
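
To make the evaluation step concrete, the following minimal Python sketch shows de Casteljau evaluation of a Bernstein-Bezier polynomial over a planar triangle in barycentric coordinates. It is illustrative only (the function and variable names are ours, not the report's), and the spherical version described in the abstract would replace the affine combination with interpolation over a geodesic triangle.

    def de_casteljau_triangle(control, u, v, w):
        """Evaluate a degree-n Bernstein-Bezier polynomial over a triangle.

        control: dict mapping multi-indices (i, j, k), i + j + k == n,
                 to control coefficients.
        (u, v, w): barycentric coordinates of the point, u + v + w == 1.
        """
        b = dict(control)
        n = max(i + j + k for (i, j, k) in b)
        for r in range(n):                       # n affine reduction steps
            b = {(i, j, n - r - 1 - i - j):
                     u * b[(i + 1, j, n - r - 1 - i - j)]
                   + v * b[(i, j + 1, n - r - 1 - i - j)]
                   + w * b[(i, j, n - r - i - j)]
                 for i in range(n - r) for j in range(n - r - i)}
        return b[(0, 0, 0)]

    # Quadratic example: six control coefficients, evaluated at the centroid.
    ctrl = {(2, 0, 0): 1.0, (0, 2, 0): 0.0, (0, 0, 2): 0.0,
            (1, 1, 0): 0.5, (1, 0, 1): 0.5, (0, 1, 1): 0.0}
    print(de_casteljau_triangle(ctrl, 1/3, 1/3, 1/3))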


MS-CIS-96-02

Spherical Sampling by Archimedes' Theorem

Min-Zhi Shao, Norman Badler

In this paper we present a simple and efficient algorithm for generating uniformly distributed samples on the unit sphere based on Archimedes' theorem. The implementation is straightforward and may be easily extended to include stratified sampling for variance reduction. Applications in image synthesis include solid angle measurement, irradiance computation, and rendering equation solution for geometrically complex environments.
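
For reference, here is a minimal Python sketch of the standard construction that Archimedes' theorem suggests (the axial projection of the sphere onto its circumscribing cylinder preserves area, so the height and azimuth may be drawn independently and uniformly). The abstract does not spell out the paper's exact algorithm, so treat this only as an illustration.

    import math, random

    def uniform_sphere_sample():
        """One uniformly distributed point on the unit sphere.

        By Archimedes' theorem, z and the azimuth phi can be drawn
        independently and uniformly on [-1, 1] and [0, 2*pi)."""
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        return (r * math.cos(phi), r * math.sin(phi), z)

    # Stratified variant (for variance reduction): subdivide the (z, phi)
    # rectangle into a grid and draw one jittered sample per cell.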


MS-CIS-96-05

Effecting Database Transformations Using Morphase

Susan B. Davidson, Anthony Kosky

Database transformations are a frequent problem for data managers supporting scientific databases, particularly those connected with the Human Genome Project. The databases involved frequently contain complex data structures not typically found in conventional databases, such as arbitrarily nested records, sets, variants, and optional fields, as well as object identities and recursive data structures. Furthermore, programs implementing the transformations must be frequently modified since the databases involved evolve rapidly, as often as 3 to 4 times a year. We present in this paper a language (WOL) for specifying transformations between such databases and describe its implementation in a system called Morphase. Optimizations are performed at all stages, with significant impact on the compilation and execution time of sample transformations.


MS-CIS-96-06

Consistency Management in the EROS Kernel

Jonathan Shapiro, David J. Farber, Jonathan Smith

EROS is a persistent operating system targeted towards managing resources with great longevity. The system provides a persistent single level store supporting two fundamental object types: nodes and pages. All primary objects, including memory segments and protection domains, are constructed out of these fundamental objects, and inherit their persistence. EROS is a pure capability system: access to objects is provided exclusively through the invocation of kernel-enforced, secure capabilities.

This paper describes the EROS Abstract Machine and the mechanisms used to achieve efficient consistency management within the system. The implementation, including all primary objects, a low overhead checkpoint/migration subsystem, and an efficient interprocess communication mechanism, requires less than 64 Kbytes of supervisor code (prior to size tuning).


MS-CIS-96-08

Connecting Planning and Acting via Object-Specific Reasoning (Dissertation)

Libby Levison

Instructions from a high level planner are in general too abstract for a behavioral simulator to execute. In this dissertation I describe an intermediate reasoning system, the OBJECT-SPECIFIC REASONER, which bridges the gap between high level task-actions and the action directives of a behavioral system. It decomposes task-actions and derives parameter values for each action directive, thus enabling existing high level planners to instruct synthetic agents with the same task-action commands that they currently produce. The OBJECT-SPECIFIC REASONER's architecture follows directly from the hypothesis that action representations are underspecified descriptions, and that objects in the same functional category are manipulated in similar ways. The action representation and the object representation are combined to complete the action interpretation, thereby grounding plans in action. The OBJECT-SPECIFIC REASONER provides evidence that a small number of object functional categories, organized in a taxonomy, makes possible a simple and elegant reasoning system which converts task-actions to action directives. To test the theory behind the OBJECT-SPECIFIC REASONER, I applied the implementation to three different 3D graphical domains.


MS-CIS-96-09

A Query Language and Optimization Techniques for Unstructured Data

Peter Buneman, Susan Davidson, Gerda Hillebrand, and Dan Suciu

A new kind of data model has recently emerged in which the database is not constrained by a conventional schema. Systems like ACeDB, which has become very popular with biologists, and the recent Tsimmis proposal for data integration organize data in tree-like structures whose components can be used equally well to represent sets and tuples. Such structures allow great flexibility in data representation.

What query language is appropriate for such structures? Here we propose a simple language UNQL for querying data organized as a rooted, edge-labeled graph. In this model, relational data may be represented as fixed-depth trees, and on such trees UNQL is equivalent to the relational algebra. The novelty of UNQL consists in its programming constructs for arbitrarily deep data and for cyclic structures. While strictly more powerful than query languages with path expressions like XSQL, UNQL can still be efficiently evaluated. We describe new optimization techniques for the deep or ``vertical'' dimension of UNQL queries. Furthermore, we show that known optimization techniques for operators on flat relations apply to the ``horizontal'' dimension of UNQL.
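
As a purely illustrative example of the data model (not UNQL syntax), an edge-labeled tree can be encoded in Python as a dictionary from labels to lists of subtrees; a relational table then becomes a fixed-depth tree, and a query along the ``vertical'' dimension is simply a traversal of unbounded depth. The encoding and the helper below are our assumptions, not taken from the paper.

    # Hypothetical encoding: each node is a dict from edge labels to lists of
    # child nodes; a relation becomes a fixed-depth tree with one edge per
    # tuple and one edge per field.
    db = {"person": [{"name": [{"Alice": []}], "dept": [{"CIS": []}]},
                     {"name": [{"Bob":   []}], "dept": [{"Math": []}]}]}

    def select(node, label):
        """Collect every subtree reachable by an edge with the given label,
        at any depth (the 'vertical' dimension of a query)."""
        found = []
        for edge, children in node.items():
            for child in children:
                if edge == label:
                    found.append(child)
                found.extend(select(child, label))
        return found

    print(select(db, "name"))   # [{'Alice': []}, {'Bob': []}]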


MS-CIS-96-10

A New Look at Bandwidth Latency Tradeoffs

John Shaffer, Jonathan Smith

Concerns about propagation delay have dominated the discussion of latency, bandwidth and their effect on distributed applications. In this paper, we argue that the relevant latency measure for applications is the Application Data Unit (ADU) latency, defined as the time between the sending of an ADU and its receipt. Since ADUs are often large, ADU latency is influenced by throughput as well as propagation delay.
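
A first-order model (our illustration, not the paper's measured model) makes the point: the latency of an ADU is roughly the propagation delay plus the transmission time, so for large ADUs the throughput term can dominate.

    def adu_latency(size_bits, throughput_bps, propagation_s):
        """Rough ADU latency: time from first bit sent to last bit received."""
        return propagation_s + size_bits / throughput_bps

    # A page-sized (4 KB) ADU over a path with 30 ms propagation delay:
    print(adu_latency(4096 * 8, 10e6, 0.030))    # ~0.0333 s at 10 Mb/s
    print(adu_latency(4096 * 8, 155e6, 0.030))   # ~0.0302 s at 155 Mb/s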

We investigated the effects of ADU latency with an experimental study of several applications. The applications used Distributed Shared Memory as an interprocess communications mechanism, constraining the ADUs to page-sized units. The applications were run on an Ethernet, an experimental ATM LAN, and using ATM on an experimental high-speed WAN. The measured results were used to normalize results gathered by inserting an experimental ATM switch output port controller in the network to create tunable delays.

The results conclusively demonstrate the effect of ADU latency on distributed application response time. The experiments give a precise characterization of the effect of varying bandwidth and propagation delay on a real system, and suggest promising directions for further improving application performance in future networks.


MS-CIS-96-12

The Power Broker: Intelligent Power Management for Mobile Computers

Sanjay Udani, Jonathan Smith

Technology has improved processor speed and memory densities at exponential rates. Rapid advances in portable computing have resulted in laptop computers with performance and features comparable to their desktop counterparts. Battery technology has failed to keep pace, decreasing the usefulness of mobile computers and portable wireless devices.

We provide a detailed analysis of the power consumption typically encountered in a networked laptop computer and the power management methods currently used. We then show how interaction between independent power consumers results in inefficient use of energy resources, and propose the Power Broker as a means for orchestrating energy use with the goal of extending battery life. The Power Broker's resource management algorithms exploit an abundant resource (CPU power) to conserve a scarce one (battery energy).


MS-CIS-96-13

Operating System Support for Protocol Boosters

A. Mallet, J. D. Chung, J. M. Smith

``Protocol Boosters'' are modules inserted into protocol graphs. They allow a protocol's behavior to adapt to its environment. Boosters can mask undesirable properties of links or subnets in an internetwork. The method permits the use of proprietary protocols and supports end-to-end optimizations.

We have implemented Protocol Booster support in the FreeBSD version of UNIX for Intel architecture machines. Our prototype embeds boosters in the 4.4 BSD-Lite Internet Protocol (IP) stack. We have measured the performance of two prototype boosters: an encryption booster (for passage across insecure subnets) and a compression booster (for passage across bandwidth-impaired subnets).
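
As a toy illustration of the idea (not the FreeBSD in-kernel code; the function names are ours), a compression booster pair could transform payloads on entry to and exit from a bandwidth-impaired subnet:

    import zlib

    def compress_boost(payload: bytes) -> bytes:
        """Sending-side booster: compress the payload before the impaired link."""
        return zlib.compress(payload)

    def decompress_boost(payload: bytes) -> bytes:
        """Receiving-side booster: restore the original payload transparently."""
        return zlib.decompress(payload)

    packet = b"highly redundant payload " * 100
    boosted = compress_boost(packet)
    assert decompress_boost(boosted) == packet and len(boosted) < len(packet)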

Our measurement data suggest that OS support for this method can be constructed with low performance overhead: execution of the protocol elements dominates any overhead introduced by our implementation. We discuss some lessons learned from the implementation.


MS-CIS-96-14

Gigabit Networks

Jonathan Smith

This chapter summarizes what we have learned in the past decade of research into extremely high throughput networks. Such networks are colloquially referred to as ``Gigabit Networks'' in reference to the billion bit per second throughput regime they now operate in. The engineering challenges are in the integration of fast transmission systems and high-performance engineering workstations.


MS-CIS-96-15

Event Signaling within Higher Performance Network Systems

Jeffrey D. Chung, C. Brendan S. Traw, Jonathan Smith

The Afterburner ATM link adapter has allowed us to evaluate three event-signaling schemes: polling, traditional interrupts, and the clocked interrupts first investigated in our operating system work in AURORA. The schemes are evaluated in the context of a single-copy TCP/IP stack. The experimental results indicate that clocked interrupts can provide throughput comparable with traditional interrupts for dedicated machines (up to over 144 Mbps, the highest TCP/IP/ATM throughput reported), and better performance when the machines are loaded with an artificial workload. Polling, implemented to be used with an unmodified netperf measurement tool, was competitive for small TCP/IP socket buffer sizes (32 KB). We conclude that clocked interrupts may be preferable for applications requiring high throughput on systems with heavy processing workloads, such as servers.


MS-CIS-96-18

Transforming Databases with Recursive Data Structures (Dissertation)

Anthony Kosky

This thesis examines the problems of performing structural transformations on databases involving complex data-structures and object-identities, and proposes an approach to specifying and implementing transformations.

We start by looking at various applications of such database transformations, and at some of the more significant work in these areas. In particular we will look at work on transformations in the area of database integration, which has been one of the major motivating areas for this work. We will also look at various notions of correctness that have been proposed for database transformations, and show that the utility of such notions is limited by the dependence of transformations on certain implicit database constraints. We draw attention to the limitations of existing work on transformations, and argue that there is a need for a more general formalism for reasoning about database transformations and constraints.

We will also argue that, in order to ensure that database transformations are well-defined and meaningful, it is necessary to understand the information capacity of the data-models being transformed. To this end we give a thorough analysis of the information capacity of data-models supporting object identity, and will show that this is dependent on the operations supported by a query language for comparing object identities.

We introduce a declarative language, WOL, based on Horn-clause logic, for specifying database transformations and constraints. We also propose a method of implementing transformations specified in this language, by manipulating their clauses into a normal form which can then be translated into an underlying database programming language.

Finally we will present a number of optimizations and techniques necessary in order to build a practical implementation based on these proposals, and will discuss the results of some of the trials that were carried out using a prototype of such a system.


MS-CIS-96-19

Model-Based Shape and Motion Analysis: Left Ventricle of a Heart (Dissertation)

Jinah Park

The accurate and clinically useful estimation of the shape, motion, and deformation of the left ventricle of a heart (LV) is an important yet open research problem. Recently, computer vision techniques for reconstructing the 3D shape and motion of the LV have been developed. The main drawback of these techniques, however, is that models are formulated in terms of either too many local parameters that require non-trivial processing to be useful for close to real time diagnosis, or too few parameters to offer an adequate approximation to the LV motion.

To address the problem, we present a new class of volumetric primitives for a compact and accurate LV shape representation in which model parameters are functions. Lagrangian dynamics are employed to convert geometric models into dynamic models that can deform according to the forces manifested in the data points. It is thus possible to make a precise estimation of the deformation of the LV shape (endocardial, epicardial and anywhere in between) with a small number of intuitive parameter functions.

We believe that the proposed technique has a wide range of potential applications. In this thesis, we demonstrate the possibility by applying it to the 3-D LV shape and motion characterization from magnetic tagging data (MRI-SPAMM). We show that the results of our experiments with normal and abnormal heart data enable us to quantitatively verify the physicians' qualitative conception of the left ventricular wall motion.


MS-CIS-96-21

Adding Structure to Unstructured Data

Peter Buneman, Susan Davidson, Mary Fernandez, and Dan Suciu

We develop a new notion of schema for unstructured data. Traditional schemas resemble the type systems of programming languages. For unstructured data, however, the underlying type may be much less constrained and hence an alternative way of expressing constraints on the data is needed. Here, we propose that both data and schema be represented as edge-labeled graphs. We develop notions of conformance between a graph database and a graph schema and show that there is a natural and efficiently computable ordering on graph schemas. We then examine certain subclasses of schemas and show that schemas are closed under query applications. Finally, we discuss how they may be used in query decomposition and optimization.


MS-CIS-96-22

Adaptive Correlation Tracking of Targets with Changing Scale

Ulf M. Cahn von Seelen, Ruzena Bajcsy

Algorithms for tracking targets imaged through a zoom lens must accommodate changes in the magnification of the target. This requirement poses particular problems for correlation techniques, which usually are not invariant to scale changes. An adaptive correlation method has been developed that selectively updates the correlation template in response to scale changes in an image sequence. The algorithm estimates a subset of the parameters of the affine transformation between the template and the matched image patch and updates the template only when the scaling exceeds given bounds. The selective template update enables correlation to track targets at varying scale while decreasing the risk of template drift.
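
The selective-update rule can be sketched as follows (the threshold and the interface are illustrative assumptions, not taken from the report): the template is re-sampled from the matched image patch only when the estimated scale factor of the affine fit leaves a tolerance band, which keeps the template current under zoom while limiting drift.

    def maybe_update_template(template, matched_patch, est_scale, bound=1.1):
        """Return the template to use for the next frame.

        est_scale: scale component of the affine transformation estimated
        between the current template and the matched image patch.
        The template is replaced only when the scaling exceeds the bound."""
        if est_scale > bound or est_scale < 1.0 / bound:
            return matched_patch        # adopt the rescaled appearance
        return template                 # otherwise keep the template (no drift)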


MS-CIS-96-25

An Easy Case of Sorting by Reversals

Nicholas Tran

We show that sorting by reversals can be performed in polynomial time when the number of breakpoints is twice the distance.
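
For readers unfamiliar with the terminology, the breakpoint count can be computed as in the small sketch below (using the standard definition from the sorting-by-reversals literature; the report may use a variant): pad the permutation with 0 and n+1 and count adjacent pairs that are not consecutive integers. The distance referred to above is the reversal distance, the minimum number of reversals needed to sort the permutation.

    def breakpoints(perm):
        """Breakpoints of a permutation of 1..n: pad with 0 and n+1, then
        count adjacent pairs that are not consecutive integers."""
        padded = [0] + list(perm) + [len(perm) + 1]
        return sum(1 for a, b in zip(padded, padded[1:]) if abs(a - b) != 1)

    print(breakpoints([3, 1, 2]))   # 3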


MS-CIS-96-26

Pipeline Rendering: Interaction and Realism Through Hardware-Based Multi-Pass Rendering (Dissertation)

Paul Joseph Diefenbach

While large investments are made in sophisticated graphics hardware, most realistic rendering is still performed off-line using ray trace or radiosity systems. A coordinated use of hardware-provided bitplanes and rendering pipelines can, however, approximate ray trace quality illumination effects in a user-interactive environment, as well as provide the tools necessary for a user to declutter such a complex scene. A variety of common ray trace and radiosity illumination effects are presented using multi-pass rendering in a pipeline architecture. We provide recursive reflections through the use of secondary viewpoints, and present a method for using a homogeneous 2-D projective image mapping to extend this method for refractive transparent surfaces. This paper then introduces the Dual Z-buffer, or DZ-buffer, an evolutionary hardware extension which, along with current frame-buffer functions such as stencil planes and accumulation buffers, provides the hardware platform to render non-refractive transparent surfaces in a back-to-front or front-to-back order. We extend the traditional use of shadow volumes to provide reflected and refracted shadows as well as specular light reclassification. The shadow and lighting effects are then incorporated into our recursive viewpoint paradigm. Global direct illumination is provided through a shadow blending technique. Hardware surface illumination is fit to a physically-based BRDF to provide a better local direct model, and the framework permits incorporation of a radiosity solution for indirect illumination as well. Additionally, we incorporate material properties including translucency, light scattering, and non-uniform transmittance to provide a general framework for creating realistic renderings. The DZ-buffer also provides decluttering facilities such as transparency and clipping. This permits selective scene viewing through arbitrary view-dependent and non-planar clipping and transparency surfaces in real-time. The combination of these techniques provides for understandable, realistic scene rendering at rates typically 5-50 times those of comparable ray traced images. In addition, the pixel-parallel nature of these methods leads to exploration of further hardware rendering engine extensions which can exploit this coherence.


MS-CIS-96-27

A Lexical Theory of Quantification in Ambiguous Query Interpretation (Dissertation)

Jong Cheol Park

Although the connection between natural language syntax and semantics has received serious attention in both linguistics and computational linguistics for the last several decades, it does not appear that it has yet been identified entirely satisfactorily. The present dissertation focuses on quantifier scope ambiguity in an attempt to identify such a connection. We show that there are some readings that are incorrectly allowed by existing theories and that other readings that are available are allowed for the wrong reason.

First, we distinguish referential NP interpretations from quantificational NP interpretations. Most traditional theories of scope do not, and they are shown to significantly overgenerate readings and/or miss a crucial generalization regarding quantificationally available readings. We present a hypothesis based on the notion of surface constituency to predict quantificationally available readings. The hypothesis is tested on core English constructions, including transitive verbs, dative alternation (ditransitive) verbs, attitude verbs, complex NPs containing prepositional phrases, possessives, subject or non-subject Wh-relatives (also with pied-piping), and various coordinate structures. We argue that the scopings allowed under the hypothesis are the ones that are available.

We then present a competence theory of quantifier scope, couched in a combinatory categorial grammar framework. The theory defines the connection between syntax and semantics in a precise way, utilizing the dual quantifier representation. We show theoretical predictions on the core English constructions, and verify that the theoretical predictions are consistent with the predictions made by the hypothesis and that there are further reasonable theoretically predicted readings.

Finally, we describe an implementation of the theory in Prolog. The implemented system takes English sentences as ambiguous queries (regarding scope), generates logical forms that are associated with them, and evaluates those logical forms with respect to a predefined database of facts. The system also works as a proof-checker of the theory.


MS-CIS-96-28

Quantifier Scope, Lexical Semantics, and Surface Structure Constituency

Jong C. Park

We present a novel conjecture concerning the scope ambiguities that arise in sentences including multiple non-referential quantifiers. We claim that many existing theories of the phenomenon fail to correctly limit the set of readings that such sentences engender by failing to distinguish between referential and non-referential quantifiers. Once the distinction is correctly drawn, we show that surface syntax can be made, via an extended notion of surface constituency, to identify the set of available differently-scoped readings for such sentences. We examine various English constructions to show that the scopings predicted by the conjecture are the only ones that are available to human language understanders. We show how to incorporate this conjecture into a theory of quantifier scope, by couching it in a unification-based Combinatory Categorial Grammar framework and implementing it in SICStus Prolog. Finally, we compare the proposal with related approaches to quantifier scope ambiguity.


MS-CIS-96-29

Kinematic Control of Human Postures for Task Simulation

Xinmin Zhao

Kinematic control of human postures for task simulation is important in human factor analysis, simulation and training. It is a challenge to control the postures of a synthesized human figure in real-time on today's graphics workstations because the human body is highly articulated. In addition, we need to consider many spatial restrictions imposed on the human body while performing a task.

In this study, we simplify the human posture control problem by decoupling the degrees of freedom (dof) in the human body. Based on several decoupling schemes, we develop an analytical human posture control algorithm. This analytical algorithm has a number of advantages over existing methods. It eliminates the local minima problem, it is efficient enough to control whole human body postures in real-time, and it provides more effective and convenient control over redundant degrees of freedom. The limitation of this algorithm is that it cannot handle over-constrained problems or general constraint functions. To overcome this limitation, we transform the human posture control problem from a 40-variable joint space to a 4- to 9-variable redundancy parameter space. We then apply nonlinear optimization techniques on the transformed problem. Because the search space is reduced, this new numerical algorithm is more likely to find a solution than existing methods which apply optimization techniques directly in the joint space.

The contributions of this thesis include a decoupling approach for simplifying the human posture control problem, an analytical human posture control algorithm based on this decoupling approach, and a numerical human posture control algorithm in redundancy parameter space. These two new algorithms are more efficient and effective than existing methods, and they also give the user control to select the desired solution. Moreover, the analytical algorithm can control the postures of a few 92-dof human figures at 30 Hz.


MS-CIS-96-31

The Machine-Assisted Proof of Programming Language Properties (Dissertation)

Myra VanInwegen

The goals of the project described in this thesis are twofold. First, we wanted to demonstrate that if a programming language has a semantics that is complete and rigorous (mathematical), but not too complex, then substantial theorems can be proved about it. Second, we wanted to assess the utility of using an automated theorem prover to aid in such proofs. We chose SML as the language about which to prove theorems: it has a published semantics that is complete and rigorous, and while not exactly simple, is comprehensible. We encoded the semantics of Core SML into the theorem prover HOL (creating new definitional packages for HOL in the process). We proved important theorems about evaluation and about the type system. We also proved the type preservation theorem, which relates evaluation and typing, for a good portion of the language. We were not able to complete the proof of type preservation because it is not true: we found counterexamples. These proofs demonstrate that a good semantics will allow the proof of programming language properties and allow the identification of trouble spots in the language. The use of HOL had its pluses and minuses. On the whole the benefits greatly outweigh the drawbacks, enough so that we believe that these theorems could not have been proved in the amount of time taken by this project had we not used automated help.


MS-CIS-96-32

Creating Efficient Fail-Stop Cryptographic Protocols

Angelos D. Keromytis, Jonathan M. Smith

Fail-stop cryptographic protocols are characterized by the property that they terminate when an active attack is detected, rather than releasing information valuable to the attacker. Since such a construction forces attacks (other than denial-of-service) to be passive, the protocol designer's concerns can be restricted to passive attacks and malicious insiders. A significant advantage of such protocols is that by stopping and not attempting to recover, proofs about protocol behavior and security properties are greatly simplified.

This paper presents a generic method for converting any existing (cryptographic) protocol into a fail-stop one, or for designing new protocols to be fail-stop. Our technique uses cryptographic hashes to validate sequences of messages by reflecting message dependencies in the hash values. An informal proof of correctness is given. We apply the technique to an early version of Netscape's Secure Socket Layer (SSL) cryptographic protocol. We also suggest a possible application to TCP streams as a high-performance alternative to the per-packet authentication of IPSEC.
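
A minimal sketch of the chained-hash idea follows (the message names and initial value are our assumptions; the paper's construction is more general): each side folds every message exchanged so far into a running digest, so any insertion, deletion, reordering, or modification by an active attacker produces a digest mismatch and the protocol stops rather than continuing.

    import hashlib

    def chain(prev_digest: bytes, message: bytes) -> bytes:
        """Fold the next message into the running digest."""
        return hashlib.sha256(prev_digest + message).digest()

    digest = b"\x00" * 32                      # agreed-upon initial value (assumed)
    for msg in [b"ClientHello", b"ServerHello", b"KeyExchange"]:
        digest = chain(digest, msg)            # both parties maintain this digest

    # Before proceeding, each party checks that the peer reports the same
    # digest; on any mismatch the protocol terminates (fail-stop) instead of
    # releasing further information to an active attacker.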

The modified protocols require small increases in message size and the number of cryptographic operations relative to the initial non-fail-stop protocols.


MS-CIS-96-33

Testing Real-Time Constraints (Dissertation)

Duncan Clarke

The verification of timing properties of real-time system models by traditional approaches that depend on the exploration of the entire system state space is impractical for large systems. In contrast, testing allows the search for violations of a property to be narrowed to a relatively small portion of the overall state space, based on assumptions regarding the structure of an implementation.

In a computer systems engineering context the primary advantages of testing over traditional state-space exploration approaches are two-fold. First, testing offers the possibility of model validation without state space explosion. Second, the tests themselves are produced as an artifact of model validation. Thus, the same tests that validated the model can be used to validate the hardware or software implementation that is created from it.

We present a framework for testing timing constraints of real-time systems. These tests are based on specifications of minimum and maximum allowable delays between input/output events in the execution of a system. We present specification and test languages and their formal semantics, and formalize a new hierarchy of test coverage criteria for domain testing of real-time properties. Based on these test coverage criteria, techniques for automatically deriving optimized test suites are presented. Our testing framework and optimization techniques are illustrated with examples and a substantial case study.


MS-CIS-96-34

Protocol Boosters

D. C. Feldmeier, A. J. McAuley, and J. M. Smith

MS-CIS-96-35

A Secure and Reliable Bootstrap Architecture

William A. Arbaugh, David J. Farber, and Jonathan M. Smith

In a computer system, the integrity of lower layers is treated as axiomatic by higher layers. Under the presumption that the hardware comprising the machine (the lowest layer) is valid, integrity of a layer can be guaranteed if and only if: (1) the integrity of the lower layers is checked, and (2) transitions to higher layers occur only after integrity checks on them are complete. The resulting integrity ``chain'' inductively guarantees system integrity.
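
A schematic rendering of conditions (1) and (2) is given below (the layer objects and the verify() and recover() interfaces are hypothetical, not AEGIS code): control is transferred to a layer only after its integrity has been checked, so the chain of guarantees is built inductively from the lowest layer upward.

    def bootstrap(layers, verify, recover):
        """layers: ordered lowest to highest (e.g. firmware, boot block, kernel).
        verify(layer): integrity check; recover(layer): restore a valid copy."""
        for layer in layers:
            if not verify(layer):          # condition (1): check before use
                layer = recover(layer)     # e.g. obtain a trusted replacement
            layer.start()                  # condition (2): transition only after the check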

When these conditions are not met, as they typically are not in the bootstrapping (initialization) of a computer system, no integrity guarantees can be made. Yet, these guarantees are increasingly important to diverse applications such as Internet commerce, intrusion detection systems, and ``active networks.'' In this paper, we describe the AEGIS architecture for initializing a computer system. It validates integrity at each layer transition in the bootstrap process. AEGIS also includes a recovery process for integrity check failures, and we show how this results in robust systems. We discuss our prototype implementation for the IBM personal computer (PC) architecture, and show that the cost of such system protection is surprisingly small.


MS-CIS-96-36

Computational Methods for Realistic Image Synthesis (Dissertation)

Min-Zhi Shao

In this thesis, we investigate the computational methods for both diffuse and general reflections in realistic image synthesis and propose two new approaches: the overrelaxation solution and the Bernstein polynomial solution.

One of the major concerns with the radiosity method is its expensive computing time and memory requirements. In this thesis, we analyze the convergence behavior of the progressive refinement radiosity method and propose two overrelaxation algorithms: the gathering and shooting solution and the positive overshooting solution. We modify the conventional shooting method to make the optimal use of the visibility information computed in each iteration. Based on a concise record of the history of the unshot light energy distribution, a solid convergence speed-up is achieved.
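
The shape of such an iteration can be sketched as follows (patch areas are taken as equal and the overshoot fraction is a placeholder; the actual overshoot estimates used by the gathering-and-shooting and positive-overshooting algorithms are derived in the thesis):

    import numpy as np

    def shoot_once(B, dB, F, rho, overshoot=0.2):
        """One progressive-refinement step with overshooting.

        B: patch radiosities, dB: unshot radiosities, F: form factors,
        rho: reflectances (equal patch areas assumed for brevity)."""
        i = int(np.argmax(dB))                 # patch with most unshot energy
        shot = dB[i] * (1.0 + overshoot)       # shoot more than is currently unshot
        delta = rho * F[i, :] * shot           # energy received by every patch
        B += delta
        dB += delta
        dB[i] -= shot                          # may go negative until energy returns
        return B, dB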

Though a great effort has been made to extend the radiosity method to accommodate general non-diffuse reflection, the current algorithms are still quite limited to simple environment settings. In this thesis, we propose using the piecewise spherical Bernstein basis functions over a geodesic triangulation to represent the radiance function. The representation is intrinsic to the unit sphere, and can be efficiently stored, evaluated, and subdivided by the numerically stable de Casteljau algorithm. We demonstrate that the computation of other fundamental radiometric quantities such as vector irradiance and reflected radiance can be reduced to the integration of the piecewise spherical Bernstein basis functions. A novel geometric integration algorithm based on adaptive domain subdivision is presented for the Bernstein-Bezier polynomials over a geodesic triangle on the unit sphere.


MS-CIS-96-37

Worlds of Change: Counterfactual Reasoning and Causation (Dissertation)

Charles L. Ortiz, Jr.

The development of a commonsense theory of causation has often been pursued along a number of somewhat orthogonal directions. The work contained herein examines the role that counterfactual reasoning can play within such a theory. The sorts of inferences one might draw in the course of causal attribution are first examined; this pre-theoretic analysis is conducted within the context of a rich microworld involving a number of agents engaged in purposeful behavior. A new theory of change, Explanatory Update Theory (EUT), is developed in which: (1) a syntactic version of belief updating underlies the semantics of counterfactuals, and events and time are explicitly represented at the object level; (2) a new solution to the frame problem is presented, based on a notion of support for a proposition, which corrects a number of problems with existing approaches involving unexplainable events, the need for persistence rules, explanation in the context of incompletely specified causal chains, and event ramifications; (3) the solution to the frame problem is integrated with the semantics for counterfactuals; and (4) the epistemic preferences that underlie the choice of alternative worlds in accommodating a counterfactual supposition in the course of causal attribution are carefully articulated. EUT is then shown to correctly handle a number of standard benchmark problems as well as a number of new benchmarks emerging from the microworld study. A commonsense causal language is formalized, consisting of terms such as preventing, enabling, maintaining, letting, helping, hindering, and so forth. The utility of counterfactual reasoning in supporting the production of causal explanations involving negative event descriptions is also demonstrated. A stratified view of causal reasoning is shown to emerge in which the utility of counterfactuals manifests itself in two ways: first, as a useful tool for isolating the role that an event played in some nexus of events, and second, as providing an efficient means for combining both causal and non-causal knowledge in the production of causal explanations.


MS-CIS-96-38

SwitchWare: Accelerating Network Evolution (White Paper)

J. M. Smith, D. J. Farber, C. A. Gunter, S. M. Nettles, D. C. Feldmeier, and W. D. Sincoskie

We propose the development of a set of software technologies (``SwitchWare'') which will enable rapid development and deployment of new network services. The key insight is that by making the basic network service selectable on a per user (or even per packet) basis, the need for formal standardization is eliminated. Additionally, by making the basic network service programmable, the deployment times, today constrained by capital funding limitations, are tremendously reduced (to the order of software distribution times). Finally, by constructing an advanced, robust programming environment, even the service development time can be reduced.

A SwitchWare switch consists of input and output ports controlled by a software-programmable element; programs are contained in sequences of messages sent to the SwitchWare switch's input ports, which interpret the messages as programs. We call these ``Switchlets''. This accelerates the pace of network evolution, as evolving user needs can be immediately reflected in the network infrastructure. Immediate reconfigurability enhances the adaptability of the network infrastructure in the face of unexpected situations. We call a network built from SwitchWare switches an active network.


MS-CIS-96-39

Eniac Quick Start

Timothy W. Rauenbusch

The Eniac is the world's first general-purpose electronic digital computer. The Quick Start guides the user through some example programs run on the Eniac, starting with the simplest techniques (the Eniac's equivalent of Hello World) and increasing in complexity. The Quick Start explains the general architecture of the Eniac and the use of the Accumulators for addition and subtraction. It illustrates control structures through the use of the world's first electronic if statements and for loops.


Send comments/questions to tech-reports@central.cis.upenn.edu