I work on making multiprocessors easier to program by leveraging changes in both computer architectures and parallel programming models.
I'm teaching CIS 371: Computer Organization and Design in Spring 2018.
Data races are one of the main culprits behind the complexity of multithreaded programming. Existing data race detectors require large amounts of metadata for each program variable to perform their analyses. The SlimFast system exploits the insight that this metadata is highly redundant: many program variables often have identical metadata state. By sharing metadata across variables, SlimFast realizes a large reduction in space usage. SlimFast uses immutable metadata to safely support metadata sharing across threads while also accelerating concurrency control. SlimFast’s lossless metadata compression achieves these benefits while preserving soundness and completeness.
Across a range of benchmarks from Java Grande, DaCapo, NAS Parallel Benchmarks and Oracle’s BerkeleyDB Java Edition, SlimFast is able to reduce memory consumption by 1.83x on average, and up to 4.90x for some benchmarks, compared to the state-of-the-art FastTrack system. By improving cache locality and simplifying concurrency control, SlimFast also accelerates data race detection by 1.40x on average, and up to 8.8x for some benchmarks, compared to FastTrack.
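To make the metadata-sharing idea concrete, here is a hedged sketch of hash-consed, immutable race-detection metadata. The names (Epoch, AccessState, intern) are illustrative, not SlimFast's actual API, and synchronization on the intern table is omitted for brevity.

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>

// A FastTrack-style epoch: (thread id, logical clock) packed into 64 bits.
using Epoch = uint64_t;
inline Epoch makeEpoch(uint32_t tid, uint32_t clock) {
    return (static_cast<uint64_t>(tid) << 32) | clock;
}

// Immutable per-variable access state: last-write and last-read epochs.
// Because instances never change, one object can be shared by every
// variable currently in the same state, and read safely by every thread.
struct AccessState {
    const Epoch write;
    const Epoch read;
    bool operator==(const AccessState& o) const {
        return write == o.write && read == o.read;
    }
};

struct AccessStateHash {
    size_t operator()(const AccessState& s) const {
        return std::hash<uint64_t>()(s.write) * 31 ^ std::hash<uint64_t>()(s.read);
    }
};

// Hash-consing table: identical states map to one shared object, so a
// million variables last written at the same epoch share one AccessState
// between them rather than holding a million copies. (Locking omitted.)
std::shared_ptr<const AccessState> intern(const AccessState& s) {
    static std::unordered_map<AccessState, std::weak_ptr<const AccessState>,
                              AccessStateHash> table;
    auto& slot = table[s];
    if (auto existing = slot.lock()) return existing;
    auto fresh = std::make_shared<const AccessState>(s);
    slot = fresh;
    return fresh;
}
```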
Correctly synchronizing multithreaded programs is challenging, and errors can lead to program failures such as atomicity violations. Existing strong memory consistency models rule out some possible failures, but they are limited because they depend on programmer-defined locking code. We present the new Ordering-Free Region (OFR) serializability consistency model, which ensures atomicity for OFRs: spans of dynamic instructions between consecutive ordering constructs (e.g., barriers). Crucially, OFRs do not end at lock operations, so atomicity extends across critical sections. Our platform, Serializable Ordering-Free Regions for Increasing Thread Atomicity Scalably (SOFRITAS), ensures that a C/C++ program’s execution is equivalent to a serialization of OFRs by default. We build two systems that realize the SOFRITAS idea: a concurrency bug-finding tool for testing called SofriTest, and a production runtime system called SoPro.
SofriTest uses OFRs to find concurrency bugs, including a multi-critical-section atomicity violation in memcached that weaker consistency models will miss. If OFRs are too coarse-grained, SofriTest suggests refinement annotations automatically. Our software-only SoPro implementation has high performance, scales well with increased parallelism, and prevents failures despite bugs in locking code. SoPro has an average overhead of just 1.59x compared to pthreads, despite pthreads’ much weaker memory model.
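To illustrate the bug class, here is a minimal sketch (mine, not the actual memcached bug) of a multi-critical-section atomicity violation: every access is properly locked, so lock-based atomicity models see nothing wrong, yet the check and the update are not atomic as a unit. Because no ordering construct separates the two critical sections, they fall within a single OFR, which SOFRITAS serializes.

```cpp
#include <mutex>

std::mutex m;
int balance = 100;

void withdraw(int amount) {
    int seen;
    {   // critical section 1: read the balance
        std::lock_guard<std::mutex> g(m);
        seen = balance;
    }
    // Another thread can interleave here and also observe seen >= amount.
    if (seen >= amount) {
        // critical section 2: perform the update
        std::lock_guard<std::mutex> g(m);
        balance -= amount;   // two concurrent withdrawals can overdraw
    }
}
```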
A program is said to have a determinacy race if logically parallel parts of a program access the same memory location and one of the accesses is a write. These races are generally bugs in the program, as different schedules of the program can lead to different results. Most prior work on detecting these races focuses on a subclass of programs with series-parallel or nested parallelism. This paper presents a race-detection algorithm for detecting races in a more general class of programs, namely programs that include arbitrary ordering constraints in addition to the series-parallel constructs. Our race-detection algorithm performs a serial execution of the program, augmented to detect races, in O(T₁ + k²) time, where T₁ is the sequential running time of the original program and k is the number of non-series-parallel constraints. The main technical novelty is a new data structure for answering reachability queries in nearly series-parallel (SP) directed acyclic graphs (DAGs). Given as input a graph comprising an n-node series-parallel graph and k additional non-SP edges, the total construction time of the data structure is O(n + k²), and each reachability query can be answered in O(1) time. The data structure is traversally incremental, meaning that it supports the insertion of nodes/edges, but only as they are discovered through a graph traversal. A small example of the kind of program this targets appears below.
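This runnable example (mine, not the paper's; it needs C++20 for std::binary_semaphore) has a series-parallel structure, two async branches, plus one extra non-SP ordering edge, the semaphore, which would be one of the k constraints above.

```cpp
#include <cstdio>
#include <future>
#include <semaphore>   // C++20

int x = 0, y = 0;
std::binary_semaphore ready{0};   // a non-series-parallel ordering edge

int main() {
    auto a = std::async(std::launch::async, [] {
        x = 1;
        ready.release();   // orders the write above before b's acquire
        y = 1;             // unordered with b's write to y: a real race
    });
    auto b = std::async(std::launch::async, [] {
        ready.acquire();
        std::printf("x=%d\n", x);  // ordered after a's write by the edge: no race
        y = 2;                     // races with a's y = 1
    });
    a.get(); b.get();
}
```

A detector that models only the series-parallel structure sees the two branches as logically parallel and would wrongly flag the read of x; tracking the semaphore edge as a non-SP constraint avoids that false positive while still catching the real race on y.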
Achieving determinism on real software systems remains difficult. Everyday code interacts with a wide array of nondeterministic sources, including those internal to the system (OS system calls, CPU instructions, other processes), external to the system (time, date, network communication), and arising from software abstractions (concurrency, threaded execution, data races). Nondeterminism complicates many tasks, from achieving reliable software builds across systems to creating reproducible scientific results. Existing approaches to enforcing determinism assume source code is available or require changing the operating system, and many impose high overhead because of their runtime monitoring.
In this work we present DetFlow, a framework for writing and deploying new and legacy software that guarantees deterministic execution. DetFlow uses a novel approach combining static, language-level guarantees with a lightweight runtime enforcement system. Applications that leverage DetFlow must have an entrypoint that lives in the DetIO monad, a type which requires that all operations, including I/O, be deterministic. Furthermore, DetFlow allows the execution of arbitrary code not written in this framework by running it in a determinizing runtime. This allows batch-processing jobs to be composed from otherwise-untrusted external tasks in a way that assures correctness. Combining support for deterministic parallelism, filesystem access, and logging, DetFlow is an ideal platform for writing scripted workflows that process large data sets in parallel. We show several use cases of DetFlow by applying it to bioinformatics data pipelines and software build systems. Our evaluation shows we can determinize existing software with minimal modifications, while preserving performance and exploiting software parallelism. We also show that DetFlow helps programmers discover nondeterminism and data races sooner, since it forces them to get reproducibility and parallelism right from the outset.
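DetIO itself is a Haskell monad; purely as a conceptual analogue, the hedged C++ sketch below (class and method names are mine, not DetFlow's) shows the underlying capability-style idea: programs perform effects only through an object that exposes exclusively deterministic operations.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>

class DetIO {
public:
    // Reading a declared input file is deterministic for fixed inputs.
    std::string readFile(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        return {std::istreambuf_iterator<char>(in), {}};
    }
    // A logical clock replaces the wall clock, a classic source of
    // nondeterminism; no method exposes real time, randomness, or the network.
    uint64_t logicalTime() { return ticks_++; }
private:
    uint64_t ticks_ = 0;
};

// The entrypoint takes the capability, mirroring DetFlow's requirement
// that main live inside DetIO. "input.txt" is a hypothetical input.
int detMain(DetIO& io) {
    return static_cast<int>(io.readFile("input.txt").size());
}
```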
Cache contention in the form of false sharing and true sharing arises when threads overshare cache lines at high frequency. Such oversharing can reduce or negate the performance benefits of parallel execution. Prior systems for detecting and repairing cache contention lack efficiency in detection or repair, contain subtle memory consistency flaws, or require invasive changes to the program environment.
In this paper, we introduce a new way to combat cache line oversharing via the Thread Memory Isolation (TMI) system. TMI operates completely in userspace, leveraging performance counters and the Linux ptrace mechanism to tread lightly on monitored applications, intervening only when necessary. TMI’s compatible-by-default design allows it to scale to real-world workloads, unlike previous proposals. TMI introduces a novel code-centric consistency model to handle cross-language memory consistency issues. TMI exploits the flexibility of code-centric consistency to efficiently repair false sharing while preserving strong consistency model semantics when necessary.
TMI has minimal impact on programs without oversharing, slowing their execution by just 2% on average. We also evaluate TMI on benchmarks with known false sharing, and manually inject a false sharing bug into the leveldb key-value store from Google. For these programs, TMI provides an average speedup of 5.2x and achieves 88% of the speedup possible with manual source code fixes.
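For readers unfamiliar with false sharing, here is a minimal runnable example (mine, not from the paper) of the oversharing TMI targets, along with the classic manual repair that TMI's runtime approach makes unnecessary.

```cpp
#include <atomic>
#include <thread>

// Adjacent counters share one 64-byte cache line, so each thread's
// increments repeatedly invalidate the other core's copy of the line.
struct Counters {
    std::atomic<long> a;
    std::atomic<long> b;
};

// The classic source-level fix pads each counter onto its own line;
// TMI instead repairs such oversharing at runtime, without source changes.
struct alignas(64) PaddedCounter { std::atomic<long> v; };

int main() {
    Counters c{};
    std::thread t1([&] { for (int i = 0; i < 1000000; ++i) c.a++; });
    std::thread t2([&] { for (int i = 0; i < 1000000; ++i) c.b++; });
    t1.join(); t2.join();
}
```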
Data race detection is a useful dynamic analysis for multithreaded programs: it is a key building block for record-and-replay, for enforcing strong consistency models, and for detecting concurrency bugs. Existing software race detectors are precise but slow, and hardware support for precise data race detection relies on assumptions, such as type safety, that many programs violate in practice.
We propose PARSNIP, a fully precise hardware-supported data race detector. PARSNIP exploits new insights into the redundancy of race detection metadata to reduce storage overheads. PARSNIP also adopts new race detection metadata encodings that accelerate the common case while preserving soundness and completeness. When bounded hardware resources are exhausted, PARSNIP falls back to a software race detector to preserve correctness. PARSNIP does not assume that target programs are type safe, and is thus suitable for race detection on arbitrary code.
Our evaluation of PARSNIP on several PARSEC benchmarks shows that it incurs performance overheads from negligible to 2.6x, with an average overhead of just 1.5x. Moreover, PARSNIP outperforms the state-of-the-art RADISH hardware race detector by 4.6x.
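As a hedged illustration of what an adaptive metadata encoding can look like (this layout is my invention, not PARSNIP's actual encoding, and bitfield layout is implementation-defined), the sketch below keeps the common case in a single word and escalates rare states to full, software-managed metadata.

```cpp
#include <cstdint>

struct VectorClock;   // full per-thread clocks, managed by the software layer

// One 64-bit metadata word per tracked location. The tag selects between
// a compact inline epoch (fast path) and a pointer to richer metadata
// (slow path); exhausting hardware resources falls back to software.
union MetadataWord {
    struct {
        uint64_t clock : 42;   // last-access logical clock
        uint64_t tid   : 20;   // last-access thread
        uint64_t tag   : 2;    // 0 = inline epoch
    } epoch;
    struct {
        uint64_t ptr : 62;     // pointer to a VectorClock
        uint64_t tag : 2;      // 1 = escalated to full metadata
    } escaped;
};

static_assert(sizeof(MetadataWord) == 8, "one word per tracked location");
```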
Graphics Processing Units (GPUs) have become widespread and popular over the past decade. Fully utilizing the parallel compute and memory resources that GPUs present remains a significant challenge, however. In this paper, we describe GPUDrano: a scalable static analysis that detects uncoalesced global memory accesses in CUDA programs. Uncoalesced global memory accesses arise when a GPU program accesses DRAM in an ill-structured way, increasing latency and energy consumption. We formalize the GPUDrano static analysis and compare it empirically against a dynamic analysis to demonstrate that false positives are rare for most programs. We implement GPUDrano in LLVM and show that it can run on GPU programs of over a thousand lines of code. GPUDrano finds 133 of the 143 uncoalesced static memory accesses in the popular Rodinia GPU benchmark suite, demonstrating the precision of our implementation. Fixing these bugs leads to real performance improvements of up to 25%.
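The pattern GPUDrano looks for is easy to see in CUDA source. In the first illustrative kernel below, consecutive threads of a warp touch consecutive words, so the warp's accesses coalesce into a few DRAM transactions; in the second, each thread strides across memory, scattering the warp's accesses over many lines. (Both kernels are my examples, not taken from the paper.)

```cuda
__global__ void coalesced(float* out, const float* in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];            // thread i -> word i: coalesced
}

__global__ void uncoalesced(float* out, const float* in, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i * stride];   // thread i -> word i*stride: scattered
}
```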
GPU programming models enable and encourage massively parallel programming with over a million threads, requiring extreme parallelism to achieve good performance. Massive parallelism brings significant correctness challenges by increasing the possibility for bugs as the number of thread interleavings balloons. Conventional dynamic safety analyses struggle to run at this scale.
We present Barracuda, a data race detector for GPU programs written in Nvidia’s CUDA language. Barracuda handles a wider range of parallelism constructs than previous work, including branch operations, low-level atomics and memory fences, which allows Barracuda to detect new classes of races. Barracuda operates at the binary level for increased compatibility with existing code, leveraging a new binary instrumentation framework that is extensible to other dynamic analyses. Barracuda incorporates a number of novel optimizations that are crucial for scaling data race detection to over a million threads.
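As a concrete instance of the bug class, this small CUDA kernel (my example; it assumes a single block of at most 256 threads) contains a shared-memory race of the kind a detector like Barracuda reports: with the barrier omitted, a thread may read its neighbor's tile entry before the neighbor has written it.

```cuda
__global__ void shiftLeft(int* data, int n) {
    __shared__ int tile[256];
    int t = threadIdx.x;
    if (t < n) tile[t] = data[t];
    // __syncthreads();                  // omitting this barrier creates a race
    if (t + 1 < n) data[t] = tile[t + 1];   // may race with the write above
}
```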
As ever more computation shifts onto multicore architectures, it is increasingly critical to find effective ways of dealing with multithreaded performance bugs like true and false sharing. Previous approaches to fixing false sharing in unmanaged languages have had to resort to highly-invasive runtime program modification. We observe that managed language runtimes, with garbage collection and JIT code compilation, present unique opportunities to repair such bugs directly, mirroring the techniques used in manual repairs.
We present Remix, a modified version of the Oracle HotSpot JVM which can detect cache contention bugs and repair false sharing at runtime. Remix’s detection mechanism leverages recent performance counter improvements on Intel platforms, which allow for precise, unobtrusive monitoring of cache contention at the hardware level. Remix can detect and repair known false sharing issues in the LMAX Disruptor high-performance inter-thread messaging library and the Spring Reactor event-processing framework, automatically providing 1.5-2x speedups over unoptimized code and matching the performance of hand-optimization. Remix also finds a new false sharing bug in SPECjvm2008, and uncovers a true sharing bug in the HotSpot JVM that, when fixed, improves the performance of three NAS Parallel Benchmarks by 7-25x. Remix incurs no statistically-significant performance overhead on other benchmarks that do not exhibit cache contention, making Remix practical for always-on use.
Contention for shared memory, in the forms of true sharing and false sharing, is a challenging performance bug to discover and to repair. Understanding cache contention requires global knowledge of the program's actual sharing behavior, and can even arise invisibly in the program due to the opaque decisions of the memory allocator. Previous schemes have focused only on false sharing, and impose significant performance penalties or require non-trivial alterations to the operating system or runtime system environment.
This paper presents the Light, Accurate Sharing dEtection and Repair (LASER) system, which leverages new performance counter capabilities available on Intel's Haswell architecture that identify the source of expensive cache coherence events. Using records of these events generated by the hardware, we build a system for online contention detection and repair that operates with low performance overhead and does not require any invasive program, compiler or operating system changes. Our experiments show that LASER imposes just 2% average runtime overhead on the Phoenix, Parsec and Splash2x benchmarks. LASER can automatically improve the performance of programs by up to 19% on commodity hardware.
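The hardware capability involved can be reached from userspace via perf_event_open. The hedged sketch below (my code, not LASER's; the raw event code is a model-specific placeholder, e.g., one of Haswell's MEM_LOAD_UOPS_*_XSNP_HITM events) requests precise PEBS samples of loads that hit modified data in another core's cache, with each sample carrying the instruction and data address.

```cpp
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

static long perf_event_open(perf_event_attr* attr, pid_t pid, int cpu,
                            int group_fd, unsigned long flags) {
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int open_hitm_sampler(pid_t pid) {
    perf_event_attr attr{};
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config = 0x0;                  // placeholder: model-specific HITM event
    attr.sample_period = 1000;          // take one sample per ~1000 HITM events
    attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR | PERF_SAMPLE_DATA_SRC;
    attr.precise_ip = 2;                // request precise (PEBS) samples
    attr.exclude_kernel = 1;
    return static_cast<int>(perf_event_open(&attr, pid, /*cpu=*/-1,
                                            /*group_fd=*/-1, /*flags=*/0));
}
```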