CIS Seminars & Events

Spring 2017 Colloquium Series

Unless otherwise noted, our lectures are held weekly on Tuesdays and/or Thursdays from 3:00 p.m. to 4:15 p.m. in Wu and Chen Auditorium, Levine Hall.

Wednesday, January 18th

Hanna Wallach
Senior Researcher, Microsoft Research New York City
Berger Auditorium, Skirkanich Hall
3:00 pm - 4:15 pm
"Machine Learning for Social Science"

Abstract:
In this talk, I will introduce the audience to the emerging area of computational social science, focusing on how machine learning for social science differs from machine learning in other contexts. I will present two related models -- both based on Bayesian Poisson tensor decomposition -- for uncovering latent structure from count data. The first is for uncovering topics in previously classified government documents, while the second is for uncovering multilateral relations from country-to-country interaction data. Finally, I will talk briefly about the broader ethical implications of analyzing social data.
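
For readers unfamiliar with the modeling idea: the sketch below factorizes a count matrix under a Poisson likelihood using the classical Lee-Seung multiplicative updates. This is a toy, non-Bayesian, two-way special case of the tensor decompositions described in the talk; the data and rank are invented for illustration.

```python
import numpy as np

# Poisson factorization of a count matrix V ~ Poisson(W @ H),
# via Lee-Seung multiplicative updates for the KL/Poisson objective.
def poisson_nmf(V, k, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

# e.g., (document, word) or (country, country) interaction counts
V = np.random.default_rng(1).poisson(3.0, size=(30, 40)).astype(float)
W, H = poisson_nmf(V, k=5)
print(np.abs(V - W @ H).mean())   # mean reconstruction error on the counts
```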

Bio:

Hanna Wallach is a Senior Researcher at Microsoft Research New York City and an Adjunct Associate Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. She is also a member of UMass's Computational Social Science Institute. Hanna develops machine learning methods for analyzing the structure, content, and dynamics of social processes. Her work is inherently interdisciplinary: she collaborates with political scientists, sociologists, and journalists to understand how organizations work by analyzing publicly available interaction data, such as email networks, document collections, press releases, meeting transcripts, and news articles. To complement this agenda, she also studies issues of fairness, accountability, and transparency as they relate to machine learning. Hanna's research has had broad impact in machine learning, natural language processing, and computational social science. In 2010, her work on infinite belief networks won the best paper award at the Artificial Intelligence and Statistics conference; in 2014, she was named one of Glamour magazine's "35 Women Under 35 Who Are Changing the Tech Industry"; in 2015, she was elected to the International Machine Learning Society's Board of Trustees; in 2016, she was named co-winner of the Borg Early Career Award; and in 2017, she will be program co-chair of the Neural Information Processing Systems conference. She is the recipient of several National Science Foundation grants, an Intelligence Advanced Research Projects Activity grant, and a grant from the Office of Juvenile Justice and Delinquency Prevention. Hanna is committed to increasing diversity and has worked for over a decade to address the underrepresentation of women in computing. She co-founded two projects---the first of their kind---to increase women's involvement in free and open source software development: Debian Women and the GNOME Women's Summer Outreach Program. She also co-founded the annual Women in Machine Learning Workshop, which is now in its twelfth year.

January 26th

Edward Lee
Electrical Engineering and Computer Sciences
UC Berkeley
"Resurrecting Laplace's Demon: The Case for Deterministic Models"

Abstract:
In this talk, I will argue that deterministic models have historically proved extremely valuable in engineering, despite fundamental limits. I will examine the role that models play in engineering, contrast it with the role they play in science, and argue that determinism is an extraordinarily valuable property in engineering, even more so than in science. I will then show that deterministic models for cyber-physical systems (CPS), which combine computation with physical dynamics, remain elusive in practice. I will argue that the next big advance in engineering methods must include deterministic models for CPS, and I will show that such models are both possible and practical. Finally, I will examine some fundamental limits of determinism, showing that chaos limits its utility for prediction, and that incompleteness means that, at least for CPS, nondeterminism is inevitable.
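
The point about chaos is easy to make concrete: the logistic map below is fully deterministic, yet two trajectories started 1e-10 apart diverge to order-one differences within a few dozen steps, so determinism alone does not buy long-horizon prediction. (A minimal illustration, not an example from the talk.)

```python
# Two trajectories of the deterministic, chaotic logistic map
# x' = r*x*(1-x), started 1e-10 apart.
r = 3.9
x, y = 0.5, 0.5 + 1e-10
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```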

Bio:
Edward A. Lee is the Robert S. Pepper Distinguished Professor in the Electrical Engineering and Computer Sciences (EECS) department at U.C. Berkeley. His research interests center on design, modeling, and analysis of embedded, real-time computational systems. He is the director of the nine-university TerraSwarm Research Center (http://terraswarm.org), a director of Chess, the Berkeley Center for Hybrid and Embedded Software Systems, and the director of the Berkeley Ptolemy project. From 2005-2008, he served as chair of the EE Division and then chair of the EECS Department at UC Berkeley. He is co-author of six books and hundreds of papers. He has led the development of several influential open-source software packages, notably Ptolemy and its various spinoffs. He received his BS degree in 1979 from Yale University, with a double major in Computer Science and Engineering and Applied Science, an SM degree in EECS from MIT in 1981, and a PhD in EECS from UC Berkeley in 1986. From 1979 to 1982 he was a member of technical staff at Bell Labs in Holmdel, New Jersey, in the Advanced Data Communications Laboratory. He is a co-founder of BDTI, Inc., where he is currently a Senior Technical Advisor, and has consulted for a number of other companies. He is a Fellow of the IEEE, was an NSF Presidential Young Investigator, won the 1997 Frederick Emmons Terman Award for Engineering Education, and received the 2016 Outstanding Technical Achievement and Leadership Award from the IEEE Technical Committee on Real-Time Systems (TCRTS).

January 31st

Margaret Martonosi
Department of Computer Science
Princeton University

Abstract:
Heterogeneous parallelism and specialization have become widely-used design levers for achieving high computer systems performance and power efficiency, from smartphones to datacenters. Unfortunately, heterogeneity greatly increases complexity at the hardware-software interface, and as a result, it brings increased challenges for software reliability, interoperability, and performance portability. Over the past three years, my group has explored a set of issues for heterogeneously parallel systems, particularly related to specifying and verifying memory consistency models (MCMs), from high-level languages, down through compilers, operating systems, and ISAs, and eventually to heterogeneous platforms comprising CPUs, GPUs, and accelerators. The suite of tools we have developed (http://check.cs.princeton.edu) offers comprehensive and fast analysis of memory ordering behavior across multiple system levels. As such, our tools have been used to find bugs in existing and proposed processors and in commercial compilers. They have also been used to identify shortcomings in the specifications of high-level languages (C++11) and instruction set architectures (RISC-V). Although memory models are traditionally considered nitpicky, boring, and even soul-crushing, my talk will show why they are of central importance for both hardware and software people today. I will also look forward to future work on MCM verification for accelerator-oriented parallelism and IoT devices.
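
For a flavor of what such analysis involves, the toy checker below enumerates every sequentially consistent (SC) interleaving of the classic store-buffering litmus test and confirms that the outcome r1 == r2 == 0 never appears under SC, even though x86-TSO hardware permits it via store buffers. The speaker's tools perform far more general, multi-level analyses; this sketch only illustrates the style of question they answer.

```python
from itertools import permutations

# Store-buffering litmus test: thread 0 does {x = 1; r1 = y},
# thread 1 does {y = 1; r2 = x}. Under SC, r1 == r2 == 0 is forbidden.
T0 = [("st", "x", 1), ("ld", "y", "r1")]
T1 = [("st", "y", 1), ("ld", "x", "r2")]

def outcomes():
    results = set()
    # every way to merge the two programs while preserving program order
    for order in set(permutations([0, 0, 1, 1])):
        mem, regs, idx = {"x": 0, "y": 0}, {}, [0, 0]
        for t in order:
            op, addr, val = (T0, T1)[t][idx[t]]
            idx[t] += 1
            if op == "st":
                mem[addr] = val
            else:
                regs[val] = mem[addr]
        results.add((regs["r1"], regs["r2"]))
    return results

print(outcomes())   # (0, 0) is absent: forbidden under SC, allowed on TSO
```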

Bio:
Margaret Martonosi is the Hugh Trumbull Adams '35 Professor of Computer Science at Princeton University, where she has been on the faculty since 1994. Martonosi's research focuses on computer architecture and mobile computing, particularly power-efficient systems. Past projects include the Wattch power modeling tool and the ZebraNet mobile sensor network, which was deployed for wildlife tracking in Kenya. Martonosi is a Fellow of both IEEE and ACM. Her major awards include Princeton University's 2010 Graduate Mentoring Award, the Anita Borg Institute's 2013 Technical Leadership Award, NCWIT's 2013 Undergraduate Research Mentoring Award, and ISCA’s 2015 Long-Term Influential Paper Award.

February 16th

Eleazar Eskin
Departments of Computer Science and Human Genetics
University of California, Los Angeles

Abstract:

Variation in human DNA sequences accounts for a significant fraction of the genetic risk factors for common diseases such as hypertension, diabetes, Alzheimer's disease, and cancer. Identifying the human sequence variation that makes up the genetic basis of common disease will have a tremendous impact on medicine in many ways. Recent efforts to identify these genetic factors through large-scale association studies, which compare information on variation between sets of healthy and diseased individuals, have been remarkably successful. However, despite the success of these initial studies, many challenges and open questions remain in how to design and analyze the results of association studies. Many of these challenges involve the analysis of recently developed but revolutionary sequencing technologies. In this talk, I will discuss a few of the computational and statistical challenges in the design and analysis of genetic studies.
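
As background on the basic statistical machinery (not taken from the talk): a case-control association test at a single variant can be as simple as a chi-square test on a 2x2 table of allele counts. The counts below are invented; genome-wide studies repeat such a test at millions of variants, which is why stringent significance thresholds and careful study design matter.

```python
import numpy as np

# 2x2 allele-count table: rows = cases/controls, columns = alleles A/a
counts = np.array([[1200,  800],    # cases
                   [1000, 1000]])   # controls

row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
expected = row @ col / counts.sum()          # under "no association"
chi2 = ((counts - expected) ** 2 / expected).sum()
print(f"chi-square = {chi2:.1f} (1 df; > 3.84 means p < 0.05)")
```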

Bio:

Eleazar Eskin's research focuses on developing computational methods for the analysis of genetic variation. He is currently a Professor in the Computer Science and Human Genetics departments at the University of California, Los Angeles. Previously, he was an Assistant Professor in Residence in Computer Science and Engineering at the University of California, San Diego. Eleazar completed his Ph.D. in the Computer Science Department of Columbia University in New York City. After graduating, he spent one year in the Computer Science Department at the Hebrew University in Jerusalem, Israel.

March 16th

Yingyu Liang   
Computer Science Dept.
Princeton University
"Theory for New Machine Learning Problems and Applications"

Abstract:

Machine learning has recently achieved great empirical success. This comes along with new challenges, such as sophisticated models that lack rigorous analysis, simple algorithms with practical success on hard optimization problems, and handling large scale datasets under resource constraints. In this talk, I will present some of my work in addressing such challenges.

The first part of the talk focuses on learning semantic representations for text data. Recent advances in natural language processing build on the approach of embedding words as low-dimensional vectors. The fundamental observation that empirically justifies this approach is that these vectors capture semantic relations. I will present a probabilistic model for generating text that mathematically explains this observation and existing popular embedding algorithms. It also reveals surprising connections to classical notions such as pointwise mutual information (PMI), and it enables the design of novel, simple, and practical algorithms for applications such as sentence embedding.
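
To make the PMI connection concrete, here is a minimal sketch of the classical route to word vectors: build a co-occurrence matrix, take positive pointwise mutual information, and factor it with an SVD. The corpus and dimensions are toy choices; the talk's generative model explains why embeddings trained quite differently end up tied to PMI.

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# co-occurrence counts within a +/-2 word window
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1

# positive PMI, then a truncated SVD for low-dimensional vectors
P = C / C.sum()
pw = P.sum(axis=1, keepdims=True)
ppmi = np.maximum(np.log(np.maximum(P / (pw * pw.T), 1e-12)), 0)
U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :3] * S[:3]          # 3-dimensional word vectors
print(dict(zip(vocab, np.round(vectors, 2))))
```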

In the second part, I will describe my work on distributed unsupervised learning, where large-scale datasets are spread across different locations. For the prototypical tasks of clustering, Principal Component Analysis (PCA), and kernel PCA, I will present algorithms that have provable guarantees on solution quality, communication cost that is nearly optimal in key parameters, and strong empirical performance.
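
A common pattern in this line of work, sketched below on invented data, is for each site to send the coordinator only a small SVD-based summary; the coordinator then runs a global PCA on the stacked summaries, so communication is a few small matrices rather than the raw data. This is a simplified illustration, not the exact algorithm or guarantees from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 20))                      # shared structure
sites = [rng.normal(size=(500, 20)) @ A for _ in range(4)]
t = 10                                             # summary rank

summaries = []
for X in sites:
    # local top-t right singular vectors, scaled by singular values
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    summaries.append(np.diag(S[:t]) @ Vt[:t])      # t x d message per site

# coordinator: global PCA on the stacked summaries
_, _, Vt = np.linalg.svd(np.vstack(summaries), full_matrices=False)
global_pcs = Vt[:5]                                # approx. top-5 components
print(global_pcs.shape)
```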

Bio:

Yingyu Liang is an associate research scholar in the Computer Science Department at Princeton University. His research interests include rigorous analysis of machine learning models and the design of efficient algorithms for applications. He received a B.S. in 2008 and an M.S. in 2010 in Computer Science from Tsinghua University, and a Ph.D. in Computer Science from Georgia Institute of Technology in 2014. From 2014 to 2016, he was a postdoctoral researcher in the Computer Science Department at Princeton University.

March 20th

Osbert Bastani
Computer Science Dept.
Stanford University
"Beyond Deductive Inference in Program Analysis"

Abstract:

Program analysis tools help developers ensure the safety and robustness of software systems by automatically reasoning about the program. An important barrier to adoption is that these "automatic" tools oftentimes require costly inputs from the human using the analysis. For example, the user must annotate missing code (e.g., dynamically loaded or binary code) and additionally provide a specification encoding desired program behaviors. The focus of this research is to minimize this manual effort using techniques from machine learning. First, in the verification setting (where the goal is to prove a given correctness property), I describe an algorithm that interacts with the user to identify all relevant annotations for missing code, and I show empirically that the required manual effort is substantially reduced. Second, in the bug-finding setting, I describe an algorithm that improves the effectiveness of random testing by automatically inferring the program's input language, and I show that it generates much higher-quality valid test cases than a baseline.
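
Once an input grammar is available (inferring it automatically is the hard part the talk addresses), generating valid random test cases is straightforward. The sampler below uses a small hand-written arithmetic-expression grammar; the grammar and depth cutoff are illustrative choices, not the talk's algorithm.

```python
import random

GRAMMAR = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["(", "expr", ")"], ["num"]],
    "num":  [[c] for c in "0123456789"],
}

def sample(symbol="expr", depth=0):
    if symbol not in GRAMMAR:
        return symbol                  # terminal symbol
    rules = GRAMMAR[symbol]
    if depth > 6:                      # bias toward termination when deep
        rules = rules[-1:]
    return "".join(sample(s, depth + 1) for s in random.choice(rules))

print([sample() for _ in range(5)])    # random syntactically valid inputs
```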

Bio:
Osbert Bastani is a Ph.D. student in Computer Science at Stanford University advised by Alex Aiken. He is interested in improving the automation of program analysis tools using techniques from machine learning, artificial intelligence, and program synthesis. His work is motivated by applications to building secure systems, and analyzing software systems that rely on machine learning models.

March 21st

Bharath Hariharan
Post-doctoral Researcher
Facebook
"Towards versatile visual recognition systems"

Abstract:
To be useful to downstream applications, visual recognition systems have to solve a diverse array of tasks: they need to recognize a large number of categories, localize instances of these categories precisely in the visual field, estimate their pose accurately and so on. This set of requirements is also not fixed a priori and can change over time, requiring recognition systems to learn new tasks quickly and with minimal training. In contrast, visual recognition systems today only produce a shallow understanding of images, restricted to recognizing categories in an image, and expanding this shallow understanding requires the expensive collection of training data.
In this talk I will describe my work on removing this shortcoming. I will show how we can build recognition systems that produce richer outputs, such as pixel-precise localization of detected objects, and how we can make progress toward systems capable of visual reasoning. Building models for these new goals requires a lot of training data. To reduce this requirement, I will present ways of leveraging past visual experience to learn new tasks, such as recognizing unseen categories, from very little data.

Bio:

I am currently a post-doctoral scholar in Facebook AI Research (FAIR). Before joining FAIR, I did my PhD with Prof. Jitendra Malik at the University of California, Berkeley, where I was awarded the Berkeley Fellowship and the Microsoft Research Fellowship. My interests are in object recognition in computer vision and machine learning.

March 23rd

Ilya Razenshteyn
MIT CSAIL
"New Algorithms for High-Dimensional Data"
Skirkanich Hall, Berger Auditorium Rm 13
3:00 pm - 4:15 pm

Abstract:

A popular approach in data analysis is to represent a dataset in a high-dimensional feature space, and reduce a given task to a geometric computational problem. However, most of the classic geometric algorithms scale poorly as the dimension grows and are typically not applicable to the high-dimensional regime. This necessitates the development of new algorithmic approaches that overcome this "curse of dimensionality". In this talk I will give an overview of my work in this area.

* I will describe new algorithms for the high-dimensional Nearest Neighbor Search (NNS) problem, where the goal is to preprocess a dataset so that, given a query object, one can quickly retrieve one or several closest objects from the dataset. Our algorithms improve, for the first time, upon the popular Locality-Sensitive Hashing (LSH) approach.

* Next, I will show how to make several of the algorithmic ideas underlying the above theoretical results practical. This yields an implementation that is competitive with the state-of-the-art heuristics for the NNS problem. The implementation is part of FALCONN, a new open-source library for similarity search; a toy LSH baseline is sketched after this abstract.

* Finally, I will talk about limits of distance-preserving sketching, i.e., compression methods that approximately preserve the distances between high-dimensional vectors. In particular, we will show that for the well-studied Earth Mover Distance (EMD) such efficient compression is impossible.

The common theme that unifies these and other algorithms for high-dimensional data is the use of "efficient data representations": randomized hashing, sketching, dimensionality reduction, metric embeddings, and others. One goal of the talk is to describe a holistic view of all of these techniques.
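
For context, here is the textbook random-hyperplane LSH baseline for cosine similarity: each point hashes to the sign pattern of a few random projections, and a query is compared only against colliding candidates. The talk's results improve on such data-independent schemes; all parameters below are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_planes, n_tables = 100, 16, 8
data = rng.normal(size=(10_000, d))

planes, tables = [], []
for _ in range(n_tables):
    H = rng.normal(size=(d, n_planes))         # one hash function per table
    buckets = {}
    for i, row in enumerate(data @ H > 0):     # sign pattern = bucket key
        buckets.setdefault(row.tobytes(), []).append(i)
    planes.append(H)
    tables.append(buckets)

def query(q, k=1):
    """Gather candidates from all tables, then rank by true cosine."""
    cand = set()
    for H, buckets in zip(planes, tables):
        cand.update(buckets.get((q @ H > 0).tobytes(), []))
    cand = sorted(cand) or list(range(len(data)))
    sims = data[cand] @ q / (np.linalg.norm(data[cand], axis=1) * np.linalg.norm(q))
    return [cand[i] for i in np.argsort(-sims)[:k]]

print(query(data[42]))   # retrieves index 42
```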

Bio:

Ilya Razenshteyn is a graduate student in the Theory of Computation group at MIT CSAIL, advised by Piotr Indyk. He is broadly interested in the theory of algorithms for massive data, with a bias towards algorithms that have the potential to be useful in applications. More specific interests include similarity search, sketching, metric embeddings, high-dimensional geometry, streaming algorithms, and compressed sensing. Ilya graduated with an M.Sc. in Mathematics from Moscow State University in 2012. His awards include an Akamai Presidential Fellowship and a Simons Foundation Junior Fellowship.

March 27th

Christina Garman
Department of Computer Science
Johns Hopkins University
"Securing Deployed Cryptographic Systems"

Abstract:

In 2015, more than 150 million records and $400 billion were lost due to publicly reported criminal and nation-state cyberattacks in the United States alone. The failure of our existing security infrastructure motivates the need for improved technologies, and cryptography provides a powerful tool for doing this. There is a misperception that the cryptography we use today is a "solved problem" and that the real security weaknesses lie in software or other areas of the system. This is, in fact, not true at all: over the past several years we have seen a number of serious vulnerabilities in the cryptographic pieces of systems, some with large consequences.

In this talk I will discuss two aspects of securing deployed cryptographic systems. I will first talk about the evaluation of systems in the wild, using as an example how to efficiently and effectively recover user passwords submitted over TLS connections encrypted with RC4, with applications to many methods of web authentication as well as the popular IMAP protocol for email. I will then turn to my work on developing tools to design and create cryptographic systems, which bridge the often large gap between theory and practice. I will introduce AutoGroup+, a tool that automatically translates cryptographic schemes from the mathematical setting used in the literature to the one typically used in practice, giving an output that is both secure and optimal.
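
The RC4 side of this story rests on keystream biases. The sketch below implements plain RC4 and reproduces the well-known Mantin-Shamir bias: the second keystream byte equals 0 roughly twice as often as uniform. Given many encryptions of the same secret under fresh keys, biases like this leak plaintext; this is a textbook illustration, not the talk's attack.

```python
import os

def rc4_keystream(key, n):
    """Plain RC4: key scheduling, then n bytes of keystream."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 20_000
hits = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print(f"Pr[Z2 = 0] ~ {hits / trials:.4f} (uniform would be {1 / 256:.4f})")
```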

Bio:

Christina Garman is a Ph.D. student at Johns Hopkins University, where she is advised by Professor Matthew Green. Her research interests focus largely on practical and applied cryptography. More specifically, her work has focused on the security of deployed cryptographic systems from all aspects, including the evaluation of real systems, improving the tools that we have to design and create them, and actually creating real, deployable systems. Some of her recent work has been on demonstrating flaws in Apple's iMessage end-to-end encryption, cryptographic automation, decentralized anonymous e-cash, and decentralized anonymous credentials. Her work has been publicized in The Washington Post, Wired, and The Economist, and she received a 2016 ACM CCS Best Paper Award.

March 30th

Prashant Nair
Computer Architecture
Georgia Tech
"Learning to Live with Errors: Architectural Solutions for Memory Reliability at Extreme Scaling"
Skirkanich Hall, Berger Auditorium Rm 13
3:00 pm - 4:15 pm

Abstract:

High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even building the next Exascale supercomputer. To ensure even a bare-minimum level of reliability, present-day solutions tend to have high performance, power and area overheads. Ideally, we would like memory systems to remain robust, scalable, and implementable while keeping the overheads to a minimum. In this talk, I will discuss how simple cross-layer architectural techniques can provide orders of magnitude higher reliability and enable seamless scalability with negligible overheads. I will also highlight how the fundamentals of memory reliability techniques can be extended into the domains of Security, Low-Power IoT radio controllers, and Quantum Accelerators.
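
As background on the baseline such techniques build on (not a scheme from the talk): memory systems detect and correct bit errors with codes like the textbook Hamming(7,4) below, where a nonzero syndrome pinpoints a single flipped bit.

```python
import numpy as np

# Hamming(7,4): G = [I | P] encodes 4 data bits into 7; H = [P^T | I]
# computes the syndrome, which equals the column of H at the error position.
G = np.array([[1,0,0,0, 0,1,1],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 1,1,0],
              [0,0,0,1, 1,1,1]])
H = np.array([[0,1,1,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [1,1,0,1, 0,0,1]])

m = np.array([1, 0, 1, 1])
c = m @ G % 2                  # 7-bit codeword
c[5] ^= 1                      # single bit-flip (e.g., a weak DRAM cell)
s = H @ c % 2                  # nonzero syndrome: an error occurred
pos = next(i for i in range(7) if (H[:, i] == s).all())
c[pos] ^= 1                    # correct it
print("recovered data bits:", c[:4])   # [1 0 1 1]
```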

Bio:

Prashant J. Nair is a Ph.D. candidate at the Georgia Institute of Technology, where he is advised by Professor Moinuddin Qureshi. Prior to this, he received his M.S. (2011-2013) from the Georgia Institute of Technology and his B.E. in Electronics Engineering (2005-2009) from the University of Mumbai. His research interests include reliability, performance, power, and refresh optimizations for current and upcoming memory systems. In these areas, he has authored and co-authored 9 papers in top-tier venues such as ISCA, MICRO, HPCA, ASPLOS, and DSN. To highlight the importance of memory reliability to the academic community, he also organized the “Memory Reliability Forum” that was co-located with HPCA 2016. He has served as the Submissions Co-chair of MICRO 2015 and on the ERC of ISCA 2016. During the course of his Ph.D., he has interned at several industrial labs, including AMD, Samsung, Intel, and IBM.

April 4th

Ali José Mashtizadeh
Computer Science Department
Stanford University
"Systems and Tools for Reliable Software: Replication, Reproducibility, and Security"

Abstract:

The past decade has seen a rapid acceleration in the development of new and transformative applications in many areas including transportation, medicine, finance, and communication. Most of these applications are made possible by the increasing diversity and scale of hardware and software systems.

While this brings unprecedented opportunity, it also increases the probability of failures and the difficulty of diagnosing them. Increased scale and transience have also made management increasingly challenging. Devices can come and go for a variety of reasons, including mobility, failure and recovery, and scaling capacity to meet demand.

In this talk, I will be presenting several systems that I built to address the resulting challenges to reliability, management, and security.

Ori is a reliable distributed file system for devices at the network edge. Ori automates many of the tasks of storage reliability and recovery through replication, taking advantage of fast LANs and low-cost local storage in edge networks.

Castor is a record/replay system for multi-core applications with predictable and consistently low overheads. This makes it practical to leave record/replay on in production systems, to reproduce difficult bugs when they occur, and to support recovery from hardware failures through fault tolerance.
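
The core record/replay idea fits in a few lines: log every nondeterministic input while recording, and feed the log back during replay so execution reproduces exactly. The Recorder class below is a hypothetical toy; Castor does this for multi-core programs at much finer granularity and with far lower overhead.

```python
import random

class Recorder:
    """Record mode logs nondeterministic values; replay mode reuses them."""
    def __init__(self, mode, log=None):
        self.mode, self.log = mode, log if log is not None else []
    def nondet(self, fn):
        if self.mode == "record":
            v = fn()
            self.log.append(v)
            return v
        return self.log.pop(0)

def run(r):
    return [r.nondet(lambda: random.randint(0, 9)) for _ in range(5)]

rec = Recorder("record")
first = run(rec)
again = run(Recorder("replay", list(rec.log)))
assert first == again          # identical execution, bug reproduced
print(first, again)
```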

Cryptographic CFI (CCFI) is a dynamic approach to control flow integrity. Unlike previous CFI systems that rely purely on static analysis, CCFI can classify pointers based on dynamic and runtime characteristics. This limits the attacks to only actively used code paths, resulting in a substantially smaller attack surface.
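
A toy rendition of the idea, with invented names and HMAC-SHA256 standing in for the hardware-accelerated MACs a real implementation would use: each code pointer is stored alongside a MAC bound to its value and usage class, and every indirect call verifies the MAC first.

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(16)      # fresh per execution

def mac(ptr: int, tag: bytes) -> bytes:
    # bind the MAC to both the pointer value and its class tag
    return hmac.new(KEY, ptr.to_bytes(8, "little") + tag, hashlib.sha256).digest()[:8]

def protect(ptr, tag):
    return ptr, mac(ptr, tag)

def indirect_call(entry, tag):
    ptr, m = entry
    if not hmac.compare_digest(m, mac(ptr, tag)):
        raise RuntimeError("CFI violation: pointer forged or reused")
    return ptr                     # would be an indirect jump in reality

fp = protect(0x400A60, b"func:handler")
print(hex(indirect_call(fp, b"func:handler")))       # legitimate call
try:
    indirect_call((0x41414141, fp[1]), b"func:handler")  # overwritten pointer
except RuntimeError as e:
    print(e)
```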

Bio:

Ali is currently completing his PhD at Stanford University where he is advised by Prof. David Mazières.  His work focuses on improving reliability, ease of management and security in operating systems and distributed systems.  Previously, he was a Staff Engineer at VMware, Inc. where he was the technical lead for the live migration products.  Ali received an M.Eng. in electrical engineering and computer science and a B.S. in electrical engineering from the Massachusetts Institute of Technology.

April 6th

Dorsa Sadigh
Electrical Engineering and Computer Science
University of California-Berkeley
"Towards a Theory of Safe and Interactive Autonomy"

Abstract:

Today’s society is rapidly advancing towards cyber-physical systems (CPS) that interact and collaborate with humans, e.g., semi-autonomous vehicles interacting with drivers and pedestrians, medical robots used in collaboration with doctors, or service robots interacting with their users in smart homes. The safety-critical nature of these systems requires us to provide provably correct guarantees about their performance in interaction with humans. The goal of my research is to enable such human-cyber-physical systems (h-CPS) to be safe and interactive. I aim to develop a formalism for design of algorithms and mathematical models that facilitate correct-by-construction control for safe and interactive autonomy.

In this talk, I will first discuss interactive autonomy, where we use algorithmic human-robot interaction to be mindful of the effects of autonomous systems on humans, and further leverage these effects for better safety, efficiency, coordination, and estimation. I will then talk about safe autonomy, where we provide correctness guarantees, while taking into account the uncertainty arising from the environment. Further, I will discuss a falsification algorithm to show robustness with respect to learned human models. While the algorithms and techniques introduced can be applied to many h-CPS applications, in this talk, I will focus on the implications of my work for semi-autonomous driving.

Bio:

Dorsa Sadigh is a Ph.D. candidate in the Electrical Engineering and Computer Sciences department at UC Berkeley. Her research interests lie in the intersection of control theory, formal methods, and human-robot interactions. Specifically, she works on developing a unifying framework for safe and interactive autonomy. Dorsa received her B.S. from Berkeley EECS in 2012. She was awarded the NDSEG and NSF graduate research fellowships in 2013. She was the recipient of the 2016 Leon O. Chua department award and the 2011 Arthur M. Hopkin department award for achievement in the field of nonlinear science, and she received the Google Anita Borg Scholarship in 2016.

April 11th

Kaushik Roy
Electrical and Computer Engineering
Purdue University
"Approximate Computing for Energy-efficient Error-resilient Multimedia Systems"

Abstract:

In today’s world there is an explosive growth in digital information content. Moreover, there is also a rapid increase in the number of users of multimedia applications related to image and video processing, recognition, mining and synthesis. These facts pose an interesting design challenge to process digital data in an energy-efficient manner while catering to desired user quality requirements. Most of these multimedia applications possess an inherent quality of "error"-resilience. This means that there is considerable room for allowing approximations in intermediate computations, as long as the final output meets the user quality requirements. This relaxation in "accuracy" can be used to simplify the complexity of computations at different levels of design abstraction, which directly helps in reducing the power consumption. At the algorithm and architecture levels, the computations can be divided into significant and non-significant. Significant computations have a greater impact on the overall output quality, compared to non-significant ones. Thus the underlying architecture can be modified to promote faster computation of significant components, thereby enabling voltage-scaling (at the same operating frequency). At the logic and circuit levels, one can relax Boolean equivalence to reduce the number of transistors and decrease the overall switched capacitance. This can be done in a controlled manner to introduce limited approximations in common mathematical operations like addition and multiplication. All these techniques can be classified under the general topic of “Approximate Computing”, which is the main focus of this talk.
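
A tiny software rendition of circuit-level approximation (illustrative, not from the talk): the "lower-part OR" style adder below computes the significant upper bits exactly but replaces the low-bit adder with a cheap OR, trading a small, bounded relative error for simpler, lower-power hardware.

```python
import random

def approx_add(a, b, k=8):
    hi = ((a >> k) + (b >> k)) << k        # exact add on significant bits
    lo = (a | b) & ((1 << k) - 1)          # cheap OR replaces the low adder
    return hi | lo

errs = []
for _ in range(10_000):
    a, b = random.getrandbits(16), random.getrandbits(16)
    errs.append(abs(approx_add(a, b) - (a + b)) / (a + b + 1))
print(f"mean relative error: {sum(errs) / len(errs):.4%}")
```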

Bio:

Kaushik Roy received his B.Tech. degree in electronics and electrical communications engineering from the Indian Institute of Technology, Kharagpur, India, and his Ph.D. degree from the electrical and computer engineering department of the University of Illinois at Urbana-Champaign in 1990. He was with the Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked on FPGA architecture development and low-power circuit design. He joined the electrical and computer engineering faculty at Purdue University, West Lafayette, IN, in 1993, where he is currently the Edward G. Tiedemann Jr. Distinguished Professor. His research interests include spintronics, device-circuit co-design for nano-scale Silicon and non-Silicon technologies, low-power electronics for portable computing and wireless communications, and new computing models enabled by emerging technologies. Dr. Roy has published more than 700 papers in refereed journals and conferences, holds 15 patents, has supervised 75 Ph.D. dissertations, and is co-author of two books on Low Power CMOS VLSI Design (John Wiley & McGraw Hill).

April 13th

Yuhao Zhu
Computer Architecture
University of Texas, Austin
"Energy-Efficient Mobile Web Computing"

Abstract:

Mobile computing is experiencing a technological renaissance, and the Web is Florence. Throughout the past decade, the Web has redefined the way people retrieve information, communicate with one another, and extract insights. Although the field is rife with opportunity, the energy-constrained nature of mobile devices is a major roadblock to the potential that next-generation Web technologies promise. In this talk, I will show a path to an energy-efficient mobile Web by rethinking the conventional abstractions across the hardware/software interface along with deep introspection of Web domain knowledge. In particular, I will describe an energy-efficient mobile processor architecture specialized for Web technologies, as well as programming language support that empowers Web developers to make calculated trade-offs between energy efficiency and end-user quality of service. Together, they form the core of my hardware-software co-design philosophy toward the next major milestone of the Web's evolution: the Watt-Wise Web. As computing systems in the Internet-of-Things era increasingly rely on fundamental Web technologies while operating under even more stringent energy constraints, the Watt-Wise Web is here to stay.

Bio:

Yuhao Zhu is a Visiting Research Fellow in the School of Engineering and Applied Sciences at Harvard University and a final year Ph.D. candidate in the Department of Electrical and Computer Engineering at The University of Texas at Austin. He is interested in designing and prototyping better hardware and software systems to make next-generation edge and cloud computing fast, energy-efficient, intelligent, and safe. His dissertation work on energy-efficient mobile computing has been supported by the Google Ph.D. fellowship. His paper awards include the Best of Computer Architecture Letters in 2014 and IEEE Micro TopPicks in Computer Architecture Honorable Mention in 2016.

April 18th

Aasheesh Kolli
Computer Engineering Lab
University of Michigan
"Architecting Persistent Memory Systems"

Abstract:
Persistent Memory (PM) technologies (also known as Non-Volatile RAM, e.g., Intel's 3D XPoint) offer the exciting possibility of disk-like durability with the performance of main memory. Persistent memory systems provide applications with direct access to storage media via processor load and store instructions, rather than relying on performance-sapping software intermediaries like the operating system, aiding the development of high-performance, recoverable software. For example, I envision storage software that provides the safety and correctness of a conventional database management system like PostgreSQL and the performance of an in-memory store like Redis. However, today's computing systems have been optimized for block storage devices and cannot fully exploit the benefits of PMs. Designing efficient systems for this new storage paradigm requires a careful rethink of computer architectures, programming interfaces, and application software.

While maintaining recoverable data structures in main memory is the central appeal of persistent memories, current systems do not provide efficient mechanisms (if any) to do so. Ensuring the recoverability of these data structures requires constraining the order of PM writes, whereas current architectures are designed to reorder memory accesses, transparent to the programmer, for performance. In this talk, I will introduce recently proposed programming interfaces, called persistency models, that will allow programmers to express the required order of PM writes. Then, I will present my work on developing efficient hardware implementations to enforce the PM write order prescribed by persistency models and tailoring software for these new programming interfaces.
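
The recoverability problem is easy to demonstrate with a toy model. The PM class and its store/barrier primitives below are invented stand-ins for cache-line writeback and fence instructions: without an ordering point between writing a record and setting its valid flag, a simulated crash can durably publish the flag without the data.

```python
import random

class PM:
    """Toy persistent memory: stores drain to durable state in any order
    unless the program inserts an ordering point (a "persist barrier")."""
    def __init__(self):
        self.durable, self.pending = {}, []
    def store(self, addr, val):
        self.pending.append((addr, val))
    def barrier(self):
        for a, v in self.pending:        # everything before is now durable
            self.durable[a] = v
        self.pending = []
    def crash(self):
        random.shuffle(self.pending)     # undrained stores land in any order,
        for a, v in self.pending[: random.randint(0, len(self.pending))]:
            self.durable[a] = v          # and some may be lost entirely
        return self.durable

def append(pm, rec, ordered):
    pm.store("data", rec)
    if ordered:
        pm.barrier()                     # data becomes durable before the flag
    pm.store("valid", True)

for ordered in (False, True):
    bad = 0
    for _ in range(10_000):
        pm = PM()
        append(pm, "payload", ordered)
        d = pm.crash()
        if d.get("valid") and d.get("data") != "payload":
            bad += 1
    print(f"ordered={ordered}: {bad} unrecoverable states in 10,000 crashes")
```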

Bio:

Aasheesh Kolli is a doctoral candidate in Computer Science and Engineering at the University of Michigan. He investigates application software, programming interfaces, and computer architectures in light of emerging persistent memory technologies. His work has resulted in multiple research papers, including a best paper nomination, at venues like the International Symposium on Microarchitecture (MICRO) and the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).

April 20th

Wyatt Lloyd
Computer Science Dept.
University of Southern California
"Low Latency and Strong Guarantees for Scalable Storage"

Abstract:

Scalable storage systems, where data is sharded across many machines, are necessary to support web services whose data is too large for a single machine to handle.  An ideal system would provide the lowest latency—to make the web services built on top of it fast—and the strongest guarantees—to make programming the web service easier.  Theoretical results prove that such an ideal system is impossible, but all hope is not lost!  Our work has made progress on this problem from both directions: providing stronger guarantees for low latency systems and providing lower latency for systems with strong guarantees.  I will cover one of these systems, Rococo, in detail.  I will also touch on our recent impossibility result, the SNOW Theorem, and how it guided us in building new systems with latency-optimal read-only transactions.

Bio:

Wyatt Lloyd is a third-year Assistant Professor of Computer Science at the University of Southern California. His research interests include the theory, design, implementation, evaluation, and deployment of large-scale distributed systems. He received his Ph.D. from Princeton University in 2013 for the COPS and Eiger systems, which demonstrated that stronger semantics are compatible with low latency for scalable geo-replicated storage. He then spent a year as a Postdoctoral Researcher at Facebook, and he continues to collaborate with its engineers on projects related to media processing, storage, and delivery.

April 27th

Zvi Galil
The John P. Imlay Jr. Dean of Computing and Professor
Georgia Tech
"Georgia Tech's Online MOOC-based Master Program"

Abstract:

The first-of-its-kind program was launched in January 2014 and has sparked a worldwide conversation about higher education in the 21st century. President Barack Obama has praised OMSCS by name twice, and over 1,000 news stories have mentioned the program. It has been described as a potential "game changer" and "the first real step in the transformation of higher education in the US." Harvard University researchers concluded that OMSCS is "the first rigorous evidence … showing an online degree program can increase educational attainment" and predicted that OMSCS will single-handedly raise the number of annual MS CS graduates in the United States by at least 7 percent.

To ensure program quality and rigor, Georgia Tech started in 2014 with a small enrollment of 380; by January 2017, enrollment exceeded 4,500. So far, 277 students have graduated from OMSCS, and another 300+ have registered to graduate in Spring 2017. The program has also paved the way for a number of similar MOOC-based MS programs.

The talk will describe the OMSCS program, how it came about, its first three years, and what Georgia Tech has learned from the OMSCS experience. We will also discuss its potential effect on higher education.

Bio:

Dr. Zvi Galil, Dean of the College of Computing at the Georgia Institute of Technology, was born in Tel Aviv, Israel. He earned B.S. and M.S. degrees in Applied Mathematics from Tel Aviv University, both summa cum laude. He then obtained a Ph.D. in Computer Science from Cornell University. After a postdoctorate at IBM's Thomas J. Watson Research Center, he returned to Israel and joined the faculty of Tel Aviv University, where he served as chair of the Computer Science department from 1979 to 1982.

In 1982 he joined the faculty of Columbia University. He served as chair of the Computer Science Department from 1989 to 1994 and as dean of The Fu Foundation School of Engineering & Applied Science from 1995 to 2007. Galil was appointed Julian Clarence Levi Professor of Mathematical Methods and Computer Science in 1987, and Morris and Alma A. Schapiro Dean of Engineering in 1995. In 2007 Galil returned to Tel Aviv University to serve as its president. In 2009 he resigned the presidency and returned to the faculty as a professor of Computer Science. In July 2010 he became the John P. Imlay, Jr. Dean of Computing at Georgia Tech.

Dr. Galil's research areas have been the design and analysis of algorithms, complexity, cryptography, and experimental design. From 1983 to 1987 he served as chairman of ACM SIGACT, the Special Interest Group on Algorithms and Computation Theory. He has written over 200 scientific papers, edited 5 books, and given more than 200 lectures in 20 countries. Galil has served as editor-in-chief of two journals and as the chief computer science adviser in the United States to the Oxford University Press. He is a Fellow of the ACM and of the American Academy of Arts and Sciences and a member of the National Academy of Engineering. In 2008 Columbia University established the Zvi Galil Award for Improvement in Engineering Student Life. In 2009 the Columbia Society of Graduates awarded him the Great Teacher Award. In 2012 the University of Waterloo awarded him an honorary doctorate in mathematics. Zvi Galil is married to Dr. Bella S. Galil, a marine biologist. They have one son, Yair, a corporate lawyer in New York.