## How to Share a Data Structure: A Fast Timing-Based Solution

*Rajeev Alur and Gadi Taubenfeld*

We consider the problem of transforming a given sequential implementation
of a data structure into a wait-free concurrent implementation.
Given the code for the operations of an object designed to work under the
assumption that only a single process accesses it at a time, we want to construct an
implementation that works correctly in a concurrent environment, where many
processes may access it simultaneously.

We assume a shared-memory model with
atomic registers, that is, registers on which each read or
write is a single atomic step.
It is well known that using atomic registers it is impossible to construct
wait-free concurrent implementations of even very simple objects such as test-and-set bits.
However, we show that knowledge about the relative speeds of processes *can*
be used for such implementations.
We assume that there is a known upper bound on the time taken by the
slowest process to execute a statement involving an access to the shared
memory.
This timing assumption is very powerful: it enables us to
construct wait-free implementations
of data structures such as queues and stacks, and of synchronization primitives such as
test-and-set, compare-and-swap, and fetch-and-add.
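To make the contrast concrete, the sketch below shows Fischer's classic timed mutual-exclusion protocol — not the construction of this paper, but the same style of timing assumption: each process announces itself in a shared register, waits out the known bound on shared-memory access time, and rechecks. All names here (`DELTA`, `FischerLock`, the demo workload) are our own illustration.

```python
import sys
import threading
import time

DELTA = 0.05  # assumed upper bound on one shared-memory access (illustrative value)

class FischerLock:
    """Fischer's timing-based mutual exclusion (illustrative sketch).
    Correctness relies on the timing assumption: every write to the
    shared register `x` completes within DELTA of the read before it."""

    def __init__(self):
        self.x = -1  # shared atomic register; -1 means "free"

    def acquire(self, pid):
        while True:
            while self.x != -1:      # spin until the register reads "free"
                time.sleep(0)        # yield the processor
            self.x = pid             # announce ourselves
            time.sleep(DELTA)        # wait out any concurrent announcer
            if self.x == pid:        # still us: every rival has lost
                return

    def release(self):
        self.x = -1                  # a single atomic write frees the lock

# Two "processes" repeatedly increment a counter inside the critical section.
sys.setswitchinterval(0.001)         # fine-grained preemption for the demo
counter = 0
lock = FischerLock()

def worker(pid, iters=10):
    global counter
    for _ in range(iters):
        lock.acquire(pid)
        counter += 1                 # protected by mutual exclusion
        lock.release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                       # 20: no increment was lost
```

Note that the protocol is wait-free only in conjunction with the timing bound: without the `DELTA` delay, two announcers could each overwrite the other and both enter the critical section.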

Our transformation works only when the given sequential implementation is bounded, that is,
there is a known upper bound on the number of steps
required to complete any of the operations it supports.
Our transformation ensures that, in the absence of contention,
executing a concurrent operation costs only a small overhead
over the sequential one, namely a constant number of additional accesses to the
shared memory.
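One plausible shape of such a low-overhead wrapper — our own reading of the abstract, not the authors' actual construction — is to announce the operation, check for contention, and if none is detected run the given sequential code unchanged, falling back to a timing-based slow path otherwise. The slow path is deliberately left as a placeholder below.

```python
from collections import deque

class SharedQueue:
    """Illustrative fast-path sketch (hypothetical, not the paper's
    construction): when no other process is active, an operation pays
    only a few extra shared-register accesses beyond the sequential code."""

    def __init__(self, nprocs):
        self.seq = deque()                # the given sequential implementation
        self.active = [False] * nprocs    # one announcement register per process

    def _contended(self, pid):
        # Read the other processes' announcement registers.
        return any(self.active[q]
                   for q in range(len(self.active)) if q != pid)

    def enqueue(self, pid, item):
        self.active[pid] = True           # constant-cost announcement write
        try:
            if self._contended(pid):
                # A real implementation would switch here to a timing-based
                # slow path, using delays bounded by the known upper bound
                # on shared-memory access time.
                raise NotImplementedError("slow path omitted in this sketch")
            self.seq.append(item)         # fast path: sequential code, unchanged
        finally:
            self.active[pid] = False      # constant-cost retraction write

    def dequeue(self, pid):
        self.active[pid] = True
        try:
            if self._contended(pid):
                raise NotImplementedError("slow path omitted in this sketch")
            return self.seq.popleft()     # fast path: sequential code, unchanged
        finally:
            self.active[pid] = False

# Solo use: operations succeed with only constant announcement overhead.
q = SharedQueue(nprocs=2)
q.enqueue(0, "a")
q.enqueue(0, "b")
print(q.dequeue(0))  # a
```

Under this reading, the "small overhead" of the abstract corresponds to the announcement write, the contention check, and the retraction write wrapped around the untouched sequential operation.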

Proceedings of the Fifth IEEE Symposium on Parallel and Distributed Processing (SPDP 1993), pp. 470-477.