This symposium will honor one of the pioneering figures in the development of the modern microprocessor. Presenters will share their perspectives on the prospects for continued advances in computer performance in the face of the slowing of classical scaling laws and ongoing technological change.

WEDNESDAY APRIL 20, noon-3PM

Levine Hall, Room 307
3330 Walnut Street
Philadelphia, PA 19104-6389

Program Schedule

  • noon - 12:05 : Welcome
  • 12:05 - 12:35 : Joseph Devietti
  • 12:35 - 1:35 : Wen-Mei Hwu
  • 1:35 - 1:40 : break
  • 1:40 - 2:10 : Andre DeHon
  • 2:10 - 3:10 : Yale Patt

Speakers

Yale N. Patt

University of Texas at Austin

The END of X, the BEGINNING of Y and what they mean for future microprocessors

We have been hearing about the end of Moore's Law for 30 years. Another ten years and it will probably happen. What will that mean? More recently, there has been the suggestion that the days of the Von Neumann machine are numbered. In fact, we debated that notion at Micro in Cambridge recently, only to realize that most people predicting the demise of Von Neumann don't even know what a Von Neumann machine is. Finally, we have already seen the end of Dennard Scaling and its influence on microprocessor design. But there is no vacuum when it comes to microprocessor hype: dark silicon, quantum computers, approximate computing, and machine learning have all rushed in to fill the void. What I will try to do in this talk is examine each of these items beyond the catchphrases we are constantly deluged with. Is any of it relevant to the design of future microprocessors? ...and why is the transformation hierarchy more critical than ever?

Wen-Mei Hwu

University of Illinois Urbana-Champaign

Parallelism, Heterogeneity, Locality, why bother?

Computing systems have become power-limited since Dennard scaling broke down in the early 2000s. In response, the industry has taken a path where application performance and power efficiency can vary by more than two orders of magnitude depending on an application's parallelism, heterogeneity, and locality. Since then, most of the top supercomputers in the world have become heterogeneous parallel computing systems, and we have mass-produced heterogeneous mobile computing devices. New standards such as the Heterogeneous System Architecture (HSA) are emerging to facilitate software development. Much has been learned about algorithms, languages, compilers, and hardware architecture in this movement. Why do applications bother to use these systems? How hard is it to program these systems today? How will we program these systems in the future? How will heterogeneity in memory devices present further opportunities and challenges? What is the impact on the long-term software engineering cost of applications? In this talk, I will go over the lessons learned from educating programmers and developing performance-critical libraries. I will then give a preview of the types of programming systems that will be needed to further reduce the software cost of heterogeneous computing.

Joseph Devietti

University of Pennsylvania

Automatically Finding & Fixing Parallel Performance Bugs

Multicore architectures continue to pervade every part of our computing infrastructure, from servers to phones and smartwatches. While these parallel architectures bring well-established performance and energy-efficiency gains over single-core designs, parallel code written for them can suffer from subtle performance bugs that are difficult to understand and repair with current tools. We'll discuss two systems that leverage hardware-software co-design to tackle false sharing performance bugs, in the context of both unmanaged languages like C/C++ and managed languages like Java. These systems use hardware performance counters for efficient bug detection, and novel runtime systems to repair bugs online without the need for programmer intervention.
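
For readers unfamiliar with false sharing, here is a minimal C++ sketch of the bug class (our illustration, not code from the talk): two threads update logically independent counters that happen to share a cache line, so the line ping-pongs between cores even though no data is logically shared.

  // Hypothetical illustration of a false-sharing bug (not from the talk).
  #include <atomic>
  #include <thread>

  struct Counters {
      std::atomic<long> a;  // adjacent fields: almost certainly land on
      std::atomic<long> b;  // the same 64-byte cache line
  };

  // The classic manual fix: align each counter to its own cache line.
  struct PaddedCounters {
      alignas(64) std::atomic<long> a;  // 64 = typical cache-line size
      alignas(64) std::atomic<long> b;
  };

  int main() {
      Counters c{};
      std::thread t1([&c] { for (int i = 0; i < 10000000; ++i) c.a++; });
      std::thread t2([&c] { for (int i = 0; i < 10000000; ++i) c.b++; });
      t1.join();
      t2.join();
      // Swapping in PaddedCounters leaves the logic identical but
      // typically runs several times faster on multicore hardware.
      return 0;
  }

Padding by hand works but requires the programmer to find the bug first; the systems described in this talk aim to detect and repair such cases automatically at runtime.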

Andre DeHon

University of Pennsylvania

Location, Location, Location -- The Role of Spatial Locality in Energy Minimizing Programmable Architectures

How do we minimize energy in computations?  Should we store data compactly in a central memory and bring it to a central location for processing (EDVAC, processors), or should we spatially distribute the computation near the data, moving data from the point of production to consumption as needed (ENIAC, FPGAs)? Are serial or parallel computations more energy efficient? We show there is an asymptotic energy advantage to aggressive exploitation of spatial locality. Our computations are already large enough to see the practical impact of this advantage, and the advantage widens as we continue to scale to larger capacity computational substrates.
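
One back-of-envelope way to see the flavor of this argument (our simplification, not necessarily the speaker's exact formulation): on a two-dimensional substrate, wire energy grows with distance. Fetching each operand from a centralized memory holding N words implies an average wire length of Theta(sqrt(N)), whereas a spatially distributed design that places producers next to consumers pays only constant distance per communication:

  E_{\mathrm{central}} = \Theta(\sqrt{N}) \cdot E_{\mathrm{wire}}, \qquad
  E_{\mathrm{local}} = O(1) \cdot E_{\mathrm{wire}}

so aggressive exploitation of spatial locality offers an asymptotic \Theta(\sqrt{N}) energy advantage per operation, one that widens as computations grow.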

Contact Info

Tel.: 215.898.0376
Email: cjtaylor@cis.upenn.edu

Sponsors

This symposium is presented by the Computer and Information Science Department of the University of Pennsylvania and the Franklin Institute.
Awards week is generously underwritten by TE Connectivity.